\documentclass{llncs}
\usepackage{times}
\usepackage{helvet}
\usepackage{courier}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{txfonts}
\usepackage{enumerate}
\usepackage{amsfonts}
\usepackage{mathrsfs}
\usepackage{amscd}
\usepackage{stmaryrd}
\usepackage{graphicx}
\usepackage{makeidx}
\usepackage{multirow}
\renewcommand{\baselinestretch}{1}
\renewcommand{\arraystretch}{1}
\newtheorem{propn}{Proposition}
\newtheorem{algorithm}{Algorithm}
\newtheorem{assumption}{Assumption}
\newcommand{\trans}{\longrightarrow}
\newcommand{\dist}[1]{{\mathcal D}(#1)}
\newcommand{\powerset}[1]{{\mathcal P}(#1)}
\newcommand{\activation}{\alpha}
\newcommand{\rimp}{\Rightarrow}
\newcommand{\nat}{{\mathbb N}}
\newcommand{\real}{{\mathbb R}}
\newcommand{\DataType}{{\mathbb D}}
\newcommand{\LayerFunc}{{\mathbb F}}
\newcommand{\layer}{L}
\newcommand{\activations}{acts}
\newcommand{\manipulation}{\delta}
\newcommand{\manipulationset}{\Delta}
\newcommand{\legal}{{V}}
\newcommand{\ladder}{ld}
\newcommand{\ladderset}{{\cal L}}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
\usepackage{color}
\usepackage{colortbl}
\newcommand{\todo}[1]{{\color{red} {[}{#1}{]}}}

\begin{document}

\title{Safety Verification of Deep Neural Networks\thanks{This work is supported by the EPSRC Programme Grant on Mobile Autonomy (EP/M019918/1). Part of this work was done while MK was visiting the Simons Institute for the Theory of Computing.}}
\author{Xiaowei Huang, Marta Kwiatkowska, Sen Wang and Min Wu}
\institute{Department of Computer Science, University of Oxford}
\maketitle

\begin{abstract}
Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. We develop a novel automated verification framework for feed-forward multi-layer neural networks based on Satisfiability Modulo Theory (SMT). We focus on safety of image classification decisions with respect to image manipulations, such as scratches or changes to camera angle or lighting conditions that would result in the same class being assigned by a human, and define safety for an individual decision in terms of invariance of the classification within a small neighbourhood of the original image. We enable exhaustive search of the region by employing discretisation, and propagate the analysis layer by layer. Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations. If found, adversarial examples can be shown to human testers and/or used to fine-tune the network. We implement the techniques using Z3 and evaluate them on state-of-the-art networks, including regularised and deep learning networks. We also compare against existing techniques to search for adversarial examples and estimate network robustness.
\end{abstract}

\section{Introduction}

Deep neural networks have achieved impressive experimental results in image classification, matching the cognitive ability of humans~\cite{LBH2015} in complex tasks with thousands of classes.
Many applications are envisaged, including their use as perception modules and end-to-end controllers for self-driving cars~\cite{NVIDIA2016}. Let $\real^n$ be a vector space of images (points) that we wish to classify, and assume that $f: \real^n \to C$, where $C$ is a (finite) set of class labels, models the human perception capability. A neural network classifier is then a function $\hat{f}(x)$ that approximates $f(x)$ from $M$ training examples $\{(x^i,c^i)\}_{i=1,\ldots,M}$. For example, a perception module of a self-driving car may take an image from a camera as input and must correctly classify the type of object in its view, irrespective of aspects such as the viewing angle and image imperfections. Thus, though they clearly include imperfections, all four pairs of images in Figure~\ref{fig:automobile} should arguably be classified as automobiles, since they appear so to the human eye. Classifiers employed in vision tasks are typically multi-layer networks, which propagate the input image through a series of linear and non-linear operators. They are high-dimensional, often with millions of dimensions, non-linear and potentially discontinuous: even a small network, such as one trained to classify hand-written images of digits 0--9, has over 60,000 real-valued parameters and 21,632 neurons (dimensions) in its first layer. At the same time, the networks are trained on a finite data set and expected to generalise to previously unseen images. To increase the probability of correctly classifying such an image, regularisation techniques such as dropout are typically used, which improve the smoothness of the classifiers, in the sense that images that are close (within $\epsilon$ distance) to a training point are assigned the same class label.

\begin{figure}
\parbox{2.9cm}{
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/390_original_as_automobile.pdf}
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/390_automobile_modified_into_bird.pdf}
{automobile to bird}
}
\parbox{2.9cm}{
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/204_original_as_automobile.pdf}
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/204_automobile_modified_into_frog.pdf}
{automobile to frog}
}
\parbox{2.9cm}{
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/201automobile.pdf}
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/201automobileToairplane.pdf}
{automobile to airplane}
}
\parbox{2.9cm}{
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/193_original_as_automobile.pdf}
\includegraphics[width=1.4cm,height=1.4cm]{images/cifar10/193_automobile_modified_into_horse.pdf}
{automobile to horse}
}
\caption{Automobile images (classified correctly) and their perturbed images (classified wrongly)}
\label{fig:automobile}
\end{figure}

Unfortunately, it has been observed in~\cite{Biggio2013,SZSBEGF2014} that deep neural networks, including highly trained and smooth networks optimised for vision tasks, are unstable with respect to so-called \emph{adversarial perturbations}. Such adversarial perturbations are (minimal) changes to the input image, often imperceptible to the human eye, that cause the network to misclassify the image. Examples include not only artificially generated random perturbations, but also (more worryingly) modifications of camera images~\cite{KGB2016} that correspond to resizing, cropping or change in lighting conditions.
They can be devised without access to the training set~\cite{practical-blackbox} and are transferable~\cite{DBLP:journals/corr/GoodfellowSS14}, in the sense that an example misclassified by one network is also misclassified by a network with a different architecture, even if it is trained on different data. Figure~\ref{fig:automobile} gives adversarial perturbations of automobile images that are misclassified as a bird, frog, airplane or horse by a highly trained state-of-the-art network. This raises potential safety concerns for applications such as autonomous driving and calls for automated techniques that can verify the correctness of the decisions made by such networks.

Safety of AI systems is receiving increasing attention, see, e.g.,~\cite{DBLP:journals/corr/SeshiaS16,DBLP:journals/corr/AmodeiOSCSM16}, in view of their potential to cause harm in safety-critical situations such as autonomous driving. Typically, decision making in such systems is either solely based on machine learning, through end-to-end controllers, or involves some combination of logic-based reasoning and machine learning components, where an image classifier produces a classification, say a speed limit or a stop sign, that serves as input to a controller. A recent trend towards ``explainable AI'' has led to approaches that learn not only how to assign the classification labels, but also additional explanations of the model, which can take the form of a justification (why this decision has been reached, for example by identifying the features that supported the decision)~\cite{Hendricks2016,Ribeiro:2016:WIT:2939672.2939778}. In all these cases, the safety of a decision can be reduced to ensuring the correct behaviour of a machine learning component. However, safety assurance and verification methodologies for machine learning are little studied.

The main difficulty with image classification tasks, which play a critical role in perception modules of autonomous driving controllers, is that they do not have a formal specification in the usual sense: ideally, the performance of a classifier should match the perception ability and class labels assigned by a human. Traditionally, the correctness of a neural network classifier is expressed in terms of \emph{risk}~\cite{Vapnik91}, defined as the probability of misclassification of a given image, weighted with respect to the input distribution $\mu$ of images. Related (statistical) robustness properties of deep neural network classifiers, defined in terms of the average minimum distance to a misclassification and independent of the data point, have been studied and can be estimated using tools such as DeepFool~\cite{fross-pract} and cleverhans~\cite{papernot2016cleverhans}. However, we are interested in the safety of an \emph{individual decision}, and to this end focus on the key property of the classifier being \emph{invariant} to perturbations \emph{at a given point}. This notion is also known as pointwise robustness~\cite{fross-theory,constraints} or local adversarial robustness~\cite{KBDJK2017}.

{\bf Contributions.} In this paper we propose a general framework for automated verification of safety of classification decisions made by feed-forward deep neural networks. Although we work concretely with image classifiers, the techniques can be generalised to other settings.
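To make the contrast between the statistical and the pointwise view concrete, the two notions discussed above can be written as follows (a standard formulation, stated here only for orientation and not as the formal definitions used later in the paper):
\[
\mathit{risk}(\hat{f}) \;=\; \Pr_{x \sim \mu}\big[\hat{f}(x) \neq f(x)\big],
\qquad\qquad
\forall x' \in \eta:\; \hat{f}(x') = \hat{f}(x),
\]
where the left-hand formula averages misclassifications over the input distribution $\mu$, whereas the right-hand formula, evaluated at a single input $x$ and a neighbourhood $\eta$ of $x$, captures the invariance of an \emph{individual decision} that we aim to verify.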
For a given image $x$ (a point in a vector space), we assume that there is a (possibly infinite) region $\eta$ around that point that incontrovertibly supports the decision, in the sense that all points in this region must have the same class. This region is specified by the user and can be given as a ball of small diameter around $x$, or as the set of all points whose salient features are of the same type. We next assume that there is a family of operations $\manipulationset$, which we call manipulations, that specify modifications to the image under which the classification decision should remain invariant in the region $\eta$. Such manipulations can represent, for example, camera imprecisions, change of camera angle, or replacement of a feature. We define a network decision to be \emph{safe} for input $x$ and region $\eta$ with respect to the set of manipulations $\manipulationset$ if applying the manipulations to $x$ does not result in a class change within $\eta$.

We employ discretisation to enable a \emph{finite} \emph{exhaustive} search of the high-dimensional region $\eta$ for adversarial misclassifications. The discretisation approach is justified in the case of image classifiers since they are typically represented as vectors of discrete pixels (vectors of 8-bit RGB colours). To achieve scalability, we propagate the analysis \emph{layer by layer}, mapping the region and manipulations to the deeper layers. We show that this propagation is sound, and is complete under the additional assumption of minimality of manipulations, which holds in discretised settings. In contrast to existing approaches~\cite{SZSBEGF2014,PMJFCS2015}, our framework can guarantee that a misclassification is found if it exists. Since we reduce verification to a search for adversarial examples, we can achieve safety \emph{verification} (if no misclassifications are found for all layers) or \emph{falsification} (in which case the adversarial examples can be used to fine-tune the network or shown to a human tester).

We implement the techniques using Z3~\cite{z3} in a tool called DLV (Deep Learning Verification)~\cite{DLV} and evaluate them on state-of-the-art networks, including regularised and deep learning networks. This includes image classification networks trained for classifying hand-written images of digits 0--9 (MNIST), 10 classes of small colour images (CIFAR10), 43 classes of the German Traffic Sign Recognition Benchmark (GTSRB)~\cite{Stallkamp2012} and 1000 classes of colour images used for the well-known ImageNet large-scale visual recognition challenge (ILSVRC)~\cite{ILSVRC}. We also perform a comparison of the DLV falsification functionality on the MNIST dataset against the methods of~\cite{SZSBEGF2014} and~\cite{PMJFCS2015}, focusing on the search strategies and statistical robustness estimation. The perturbed images in Figure~\ref{fig:automobile} are found automatically using our tool for the network trained on the CIFAR10 dataset. This invited paper is an extended and improved version of~\cite{HKWW2016}, where a version including appendices can also be found.

\section{Background on Neural Networks}

We consider feed-forward multi-layer neural networks~\cite{bishop1995neural}, henceforth abbreviated as neural networks. Perceptrons (neurons) in a neural network are arranged in disjoint layers, with each perceptron in one layer connected to the next layer, but no connection between perceptrons in the same layer.
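To fix intuitions before the formal definitions, the following minimal Python sketch shows how an input is propagated through such layers, and how a naive, discretised search over single-pixel manipulations of a fixed span can look for a class change around a given input. It is an illustration only, not the DLV implementation (which propagates the analysis through the layers and employs Z3); the weights, layer sizes and function names are hypothetical placeholders.

\begin{verbatim}
# A minimal sketch (not the DLV implementation): a tiny fully-connected
# network with ReLU activations, plus a brute-force search that applies
# discretised single-pixel manipulations of magnitude +/- span to an
# input and reports any class change.  All weights and parameters are
# illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical weights: 4-pixel input, 3 hidden perceptrons, 2 classes.
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)
W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)

def classify(x):
    """Propagate x through the layers and return the predicted class."""
    h = np.maximum(0.0, W1 @ x + b1)    # hidden-layer activation (ReLU)
    return int(np.argmax(W2 @ h + b2))  # output layer + argmax

def search_manipulations(x, span=0.1):
    """Try single-pixel changes of +/- span (clipped to [0, 1]) and
    return a misclassified image if the class changes, else None."""
    c = classify(x)
    for i in range(len(x)):
        for d in (-span, span):
            x2 = x.copy()
            x2[i] = np.clip(x2[i] + d, 0.0, 1.0)
            if classify(x2) != c:
                return x2
    return None

x = np.array([0.2, 0.8, 0.5, 0.1])
print(classify(x), search_manipulations(x))
\end{verbatim}

In contrast to this brute-force enumeration over the input layer alone, the method developed in this paper maps the region $\eta$ and the manipulations into deeper layers and covers the region exhaustively, with the soundness and completeness guarantees discussed above. We now turn to the formal definitions.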
Each layer $L_k$ of a network is associated with an $n_k$-dimensional vector space $D_{L_{k}} \subseteq \real^{n_k}$, in which each dimension corresponds to a perceptron. We write $P_k$ for the set of perceptrons in layer $L_k$ and $n_k=|P_k|$ for the number of perceptrons (dimensions) in layer $L_k$.

Formally, a \emph{(feed-forward and deep) neural network} $N$ is a tuple $(L,T,\Phi)$, where $L=\{L_k~|~k\in \{0,\ldots,n\}\}$ is a set of layers such that layer $L_0$ is the \emph{input} layer and $L_n$ is the \emph{output} layer, $T\subseteq L\times L$ is a set of sequential connections between layers such that, except for the input and output layers, each layer has an incoming connection and an outgoing connection, and $\Phi=\{\phi_k~|~k\in \{1,\ldots,n\}\}$ is a set of \emph{activation functions} $\phi_k: D_{L_{k-1}} \to D_{L_{k}}$, one for each non-input layer. Layers other than the input and output layers are called \emph{hidden} layers. The network is fed an input $x$ (a point in $D_{L_{0}}$) through its input layer, which is then propagated through the layers by successive application of the activation functions. An \emph{activation} for point $x$ in layer $k$ is the value of the corresponding function, denoted $\activation_{x,k}=\phi_k(\phi_{k-1}(\ldots\phi_1(x))) \in D_{L_k}$, where $\activation_{x,0}=x$. For perceptron $p \in P_k$ we write $\alpha_{x,k}(p)$ for the value of its activation on input $x$. For every activation $\activation_{x,k}$ and layer $k'