% This is LLNCS.DOC the documentation file of
% the LaTeX2e class from Springer-Verlag
% for Lecture Notes in Computer Science, version 2.4
\documentclass{llncs}
\usepackage{llncsdoc}


\usepackage{cite}
%\usepackage{caption}
\usepackage{array}
\usepackage{algorithm}
\usepackage{algorithmic}
%\usepackage{algpseudocode}
\usepackage{multirow}
\usepackage{amsmath}
\usepackage{xcolor}
%\usepackage{graphics}
%\usepackage{graphicx}
\usepackage{epsfig}
\usepackage{graphicx}
\usepackage{subfigure}
\usepackage{wrapfig}
\usepackage[title,titletoc,header]{appendix}

\begin{document}

\title{The Article}

\author{Anonymous authors for review}
%\author{Chu Huang\inst{1} \and Sencun Zhu \inst{2} \and  Robert Erbacher \inst{3}}

%\institute{School of Information Science and Technology, Penn State University \and Department of Computer Science and Engineering, Penn State University  \and U.S. Army Research Laboratory(ARL)}

%\email{cuh171@ist.psu.edu \and szhu@cse \ and robert.f.erbacher.civ@mail.mil}

\maketitle


\begin{abstract}

Heterogeneous architectures are more robust because an attack can exploit only one weakness of each protocol or implementation at a time, and is therefore unlikely to disrupt the entire network in a single strike. Following the survivability-through-heterogeneity philosophy, we present a novel approach to improving the survivability of networked systems by adopting the technique of software diversity. Specifically, we design an efficient algorithm to select and deploy a set of off-the-shelf software to hosts in a networked system, such that the number and types of vulnerabilities present on one host differ from those on its neighbors. In this way, we are able to contain a worm in an isolated ``island''. We also take practical and real-world resource constraints into consideration, given that different hosts may have diverse requirements based on different system prerequisites. Finally, we evaluate the performance of our strategy through simulations on both simple and complex system models. The simulation results confirm the effectiveness of our methods and show that the level of heterogeneity our algorithm can actually create depends on the ratio of the number of software packages installed to the total number of available packages.
\end{abstract}


\section{Introduction}

With the fast advancement of information technology, organizations are becoming ever more dependent on interconnected systems for carrying out everyday tasks. However, the pervasive interdependence of such infrastructures increases the risk of attack and thus poses numerous challenges to system security. One major problem in such networked environments is software monoculture \cite{lala2009monoculture, stamp2004risks}, which can be the cause of serious security threats and vulnerabilities. Running the risk of exposing a weakness that is common to all of their components, such homogeneous systems facilitate the spread of attacks and enable large-scale exploitations that can easily result in an overall crash. Considering the consequences of software monoculture in intensively connected systems, there is an urgent need to control the damage of automated attacks that take advantage of the connectivity information of the networked system.


In contrast to a system with homogeneous components, heterogeneous architectures are expected to have higher survivability \cite{zhang2001heterogeneous, yang2008improving, o2004achieving}. This is much like the maintenance of genetic and ecosystem diversity in biology: in the biological world, no two individuals are the same, and this variability allows at least a portion of a species to survive an epidemic. Inspired by such biodiversity, in this study we propose a software diversity-based approach to address the problem of survivability in complex networked systems under automated attacks.

Although a large number of works have been proposed aiming to improve system resilience and survivability under various attacks, they cannot fully meet three highly desired requirements: (R1) \textit{resistance} to automated attacks in the networked system; (R2) \textit{unpredictability/movability} of the environment; (R3) \textit{practicability}. To see how existing approaches are limited in meeting these three requirements, as we will review in the next section, we first classify existing software diversity techniques into three main categories: software diversity at the system level, software diversity at the network level, and N-version programming. We briefly summarize the limitations of these approaches in terms of the three requirements as follows. 1) Diversity at the system level is achieved mainly through randomization techniques. Such approaches are limited to individual machines, and it is not clear whether they can be extended to improve the survivability of a networked system as a whole. 2) Diversification methods at the network level compensate for the limitations of the former approach, but they consider only the assignment of a single software version to each host. In real-world scenarios, however, a host (e.g., a commodity PC) typically needs more than one software product installed (e.g., operating system, web browser, email client, office suite) in order to perform particular tasks. Thus this method is not practical enough to be applied to a common networked system (e.g., an enterprise intranet). 3) N-version programming achieves higher system survivability through underlying multiple-version software units that tolerate software faults.
First called ``redundant programming'' \cite{avizienis1995methodology}, this method has mainly been adopted for critical and special-purpose cases due to the high computational cost involved, and is not practical enough to be used routinely in real-world organizations.

Given the above limitations of current studies, in this work we propose a location-based software allocation strategy based on graph coloring. Our approach is motivated by the survivability-through-heterogeneity philosophy. Specifically, we create a diversified environment by assigning an appropriate set of off-the-shelf software to each host in the networked system. In doing so, our algorithm breaks the originally connected network down into isolated clusters, which effectively restrains attacks from propagating. Take the graph in Figure 1 as an example, in which 14 machines are represented by nodes and 5 distinct vulnerabilities by different colors. Given that an attack can only propagate by exploiting one type of vulnerability (color), we use solid lines to connect nodes of the same color, depicting all possible attack-spreading paths, and dashed lines to link nodes of different colors, denoting all non-spreading connections. By applying our algorithm to this graph, we find that even in the worst-case scenario a successful attack can compromise at most three machines (in red). This indicates that by optimally assigning software to connected machines in a non-adjacent manner, our algorithm can effectively reduce the epidemic effect of an attack.


\setlength{\intextsep}{3pt plus 2.5pt minus 1.5pt}
\setlength{\textfloatsep}{3pt plus 2.5pt minus 1.5pt}
               
\begin{figure}[!ht]
\setlength{\abovecaptionskip}{-0.1cm}
	\setlength{\belowcaptionskip}{-0.4cm}
	%\setlength{\intextsep}{-1cm}
\begin{center}
\includegraphics[width=0.45\textwidth]{images/example.png}
\caption{\small{Network topology utilizing a diverse software distribution}}
\label{fig:random graph}
\end{center}
\end{figure}


Our software allocation method surpasses previous works by taking all of the above-mentioned requirements into consideration. First, by assigning appropriate software to hosts according to the network connectivity, our approach enhances the system's resistance to automated attacks. In addition, given the possibility of accommodating our algorithm in a more dynamic environment, our method could further increase the unpredictability of the system and thus better improve its survivability. Compared with prior methods, our algorithm takes real-world constraints on resource allocation, including both host constraints and software constraints, into consideration, which ensures practicability. Besides, we demonstrate via simulations that the performance of our software assignment algorithm is better than that of previous ones, and is very close to the optimal solution. Last but not least, through experiments we find that the level of heterogeneity our algorithm achieves depends on the ratio of the number of software packages installed to the total number of available packages. We have also identified critical points for each representative topology. Our findings can serve as a guideline for balancing the trade-off between survivability and cost.


%The rest of the paper is organized as follows. In Section 2, we summarized the works related to ours. In Section 3, we give an overview of software diversity approach based on graph coloring on an abstracted the system model. In Section 4, we briefly discuss the idea of accommodating software shuffling in order to further improve the survivability. Section 5 shows the experiment results. Finally, we conclude the paper in Section 6.


\section{Related Work}

We review the state-of-the-art approaches based on the principle of software diversity. These studies can be classified into three main categories: diversity at the system level, diversity at the network level, and N-version programming.

Diversity has been applied at the system level. These methods consist essentially of randomization of the address space layout, the instruction set, and data. Address space layout randomization (ASLR), the most successful such technique \cite{team2003pax} \cite{kil2006address}, randomizes the base address of each program region: heap, code, and stack. It has already been implemented in major operating systems, including OpenBSD, Linux, Windows \cite{whitehouse2007analysis}, and MacOS \cite{nguyen2010effectiveness}. Another randomization technique for software transformation is instruction set randomization (ISR). Portokalidis and Keromytis \cite{keromytis2010global} proposed obfuscating the underlying system's instructions in order to defeat code-injection attacks. Their method randomizes all binaries with different secret keys so that malicious code introduced by attackers fails to execute correctly. Data randomization is yet another randomization-based approach: by applying different random masks to data in memory \cite{cadar2008data}, attempts to write outside objects are disrupted, since attackers cannot determine which memory regions are associated with particular objects. Cowan et al. \cite{cowan2003pointguard} presented an approach that randomizes stored pointer values, as opposed to the locations where objects are stored; the encryption is achieved by XORing pointer values with a randomly generated integer mask. The above works rely solely on attackers' inability to guess a secret key. Moreover, it is not clear that software transformation of individual machines leads to overall diversification of the networked system as a whole.
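As a toy illustration of the pointer-encryption idea above (not PointGuard's actual implementation; the mask width and function names here are ours), XORing pointer values with a secret random mask makes a corrupted pointer decrypt to an unpredictable address:

```python
import secrets

# Toy sketch of pointer-value encryption by XOR with a random mask.
# A real implementation operates inside the compiler/runtime; here we
# model only the arithmetic (MASK, encrypt_ptr, decrypt_ptr are ours).
MASK = secrets.randbits(32)

def encrypt_ptr(p):
    return p ^ MASK          # stored (encrypted) pointer value

def decrypt_ptr(c):
    return c ^ MASK          # XOR with the same mask restores the pointer

p = 0xDEADBEEF
assert decrypt_ptr(encrypt_ptr(p)) == p
# An attacker overwriting the stored value without knowing MASK obtains a
# pointer that decrypts to an effectively random address.
```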

Diversity at the network level is achieved by using different applications, operating systems, and communication protocols \cite{geer2003cyberinsecurity} \cite{zhang2001heterogeneous} within a networked system. Mont et al. \cite{mont2002towards} introduced an approach to ensure diversity for common, widespread software applications, in which diversity is enforced at installation time by a random selection and deployment of critical software components. Hiltunen et al. \cite{hiltunen2000survivability} proposed the use of fine-grain customization and dynamic adaptation as key enabling technologies for survivable system design. O'Donnell and Sethu \cite{o2004achieving} presented several distributed algorithms for assigning software packages to individual systems; however, their algorithms incur high communication overhead when negotiating colors among the nodes. Yang et al. \cite{yang2008improving} highlighted the same diversity idea and applied it to sensor networks with limited choices of software packages, but the effectiveness of their approach on complex networks remains unclear.

N-version programming was introduced in the 1970s: multiple teams of programmers independently develop functionally equivalent versions of the same program in order to minimize the risk of having the same vulnerabilities in all versions \cite{chen1978n}. The framework monitors the behavior of the variants in order to detect divergence of their results. Recently, several works have been proposed based on automatically generated software variants. Cox et al. \cite{cox2006n} propose a general framework for increasing application security by running in parallel several automatically generated diversified variants of the same program. Orchestra \cite{salamat2009orchestra} runs two versions of the same application with stacks growing in opposite directions and synchronizes their execution at the level of system calls, raising an alarm if any divergence is detected, as would be triggered by a stack-based buffer overflow attack. Such redundancy-based N-version programming achieves high resistance, but it involves a much higher computational cost than our approach.


\vspace{-3pt}
\section{Graph Coloring Approach}


\subsection{System Model}

In the following, we use an undirected graph $G=(V,E)$ as the abstraction of a general finite networked system. We define $V = \lbrace 0, 1, \cdots, n-1\rbrace$ as the set of nodes of graph $G$, which denotes all hosts or devices comprising the networked system. Without loss of generality, we assume that $G$ is connected. We also define $E$ as the set of edges between nodes, representing inter-machine connections. The connectivity of a graph consisting of $n$ vertices can be represented by an \emph{adjacency matrix} $M = [m_{uv}]$, where each cell contains either a one (indicating an edge between vertices $u$ and $v$) or a zero (indicating none). For any two distinct vertices $u, v \in V$, an edge between the host pair ($u,v$) indicates that $u$ and $v$ are able to transmit data to each other (e.g., they are physically connected or can communicate through the network, such as via TCP/UDP). In the rest of this paper, we use the terms host, node, and vertex interchangeably.
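As a minimal sketch of this model (the function name and edge-list input format are illustrative, not part of the paper), the adjacency matrix of an undirected graph can be built as:

```python
# Build the adjacency matrix M = [m_uv] of an undirected graph G = (V, E)
# with vertices labeled 0..n-1 and edges given as (u, v) pairs.
def adjacency_matrix(n, edges):
    M = [[0] * n for _ in range(n)]
    for u, v in edges:
        M[u][v] = 1  # m_uv = 1 if and only if (u, v) is in E
        M[v][u] = 1  # undirected graph: the matrix is symmetric
    return M

M = adjacency_matrix(4, [(0, 1), (1, 2), (2, 3)])
```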


\begin{table}[!ht] \small
  \setlength{\belowcaptionskip}{-0.01cm}
%\renewcommand{\arraystretch}{1.3}
\caption{Main notations used in this paper.}   
\label{notations}
\centering
%\begin{center}
\begin{tabular} {|r | p{0.6\textwidth} | }
\hline
         $M = [m_{uv}]$ & adjacency matrix of system graph $G = (V, E)$, where $|V| = n$; $m_{uv} = 1$ if and only if $(u, v) \in E$, and $m_{uv} = 0$ otherwise \\  
         $k$ & the number of distinct software, $k = |S|$  \\
         $c_{1}, c_{2}, \cdots, c_{k}$ & the set of colors  \\
         $H_{i}$ & the set of color assigned to vertex $v_{i}$,  $H_{i} \subseteq \{c_{1}, c_{2}, \cdots, c_{k}\}$\\
         $C = [c_{ij}]$ & constraint matrix of host constraints, where $c_{ij} = 1$ if $v_{i}$ is constrained by color $c_{j}$; otherwise $c_{ij}$ is set to 0 \\
         $CSTR_{h}$ & the collection of tuples representing the host coloring constraints \\
         $CSTR_{s}$ & the collection of tuples representing the software constraints \\
         $W = \{w_{1}, w_{2}, \cdots, w_{n}\}$ & the set of weights, where $w_{i}$ indicates the number of colors assigned to vertex $v_{i}$, and $w_{i} \ge 0$ \\
         %$F_{a}$ & Possion distribution, which can be calculated as $f(k; \lambda) = \frac{\lambda^{k}e^{-\lambda}}{k!}$ $(k = 0, 1, 2, \cdots)$. \\
         %$\bigtriangleup t$ & actual time interval between consecutive shuffling events. \\
         %$\rho$ & shuffling controlling value \\ 
     \hline
%\end{tabular} 
%\end{center}
\end{tabular}
\end{table}


In addition to abstracting a networked system as a graph, we also introduce some concepts that are specific to our study. Considering the characteristics of the system model, we define a \emph{defective edge} as a connection whose two endpoints share the same type of vulnerability (i.e., have the same type of software installed); otherwise, an edge is called an \emph{immune edge}. Building on its original sense, a \emph{connected component} is defined as a subgraph inside which all vertices are connected through defective edges, whereas all its boundary edges are immune ones.

\subsection{Graph Coloring Problem}

Considering the similarity between coloring vertices and our software assignment task, we transform our problem into a graph-coloring problem, where each machine is represented by a vertex and each distinct software product (vulnerability) is represented by a color. A defective edge between two vertices with the same color indicates that the exploitation/compromise of one type of vulnerability on one host can lead to the exploitation/compromise of the other. We assume that different software products do not share the same vulnerability and that each has only one kind of vulnerability; our algorithm works the same if a piece of software has multiple vulnerabilities, because that does not affect the nature of each edge in the graph, i.e., defective or immune. Given that an automated attack is restricted to a connected component, the size of the maximal connected component indicates the worst-case infection of a worm attack. Thus, if one can effectively limit the size of the maximal connected component, system survivability can be improved significantly. Hence, the objective of our method becomes: \emph{given a fixed number of colors, by appropriately assigning colors to every node, minimize the size of the largest connected component formed by a single color}.
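The worst-case infection metric described above can be sketched as follows (a simplified rendering under our own naming: vertices carry sets of colors, and components are traversed along defective edges, i.e., edges whose endpoints share a color):

```python
from collections import deque

# Size of the largest connected component formed by defective edges.
# colors[i] is the set of colors (software products) assigned to vertex i.
def max_defective_component(n, edges, colors):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        if colors[u] & colors[v]:   # defective edge: a shared color
            adj[u].append(v)
            adj[v].append(u)
    best, seen = 0, set()
    for start in range(n):
        if start in seen:
            continue
        size, q = 0, deque([start])  # BFS over defective edges only
        seen.add(start)
        while q:
            u = q.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best
```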

The validity of our approach rests on the assumption that diverse software products are vulnerable only to different exploits, and thus will not be compromised by the same attack. This assumption is supported by \cite{han2009effectiveness}, which found that more than $98.5\%$ of software products have substitutes, and the majority of them do not share the same vulnerabilities.

It is important to note that our coloring problem differs from the classic graph coloring problem \cite{chaitin1982register}. The classic $k$-coloring problem is NP-complete and requires an exponential amount of computation to identify the optimal solution. In this paper, we present a coloring algorithm that looks for a non-optimal assignment, whose running time is proportional to the number of vertices and the number of available colors. Besides, the mapping of colors to vertices is not one-to-one in our coloring task: a vertex may be assigned multiple colors. For example, some vertices might have only one vulnerability, while the rest have two or even more. Moreover, our coloring rule differs in that it allows adjacent vertices to share the same colors.


\subsection{Constraints}

In real-world scenarios, a number of realistic constraints exist and give rise to different system requirements. For practical purposes, our task takes these real-world constraints into consideration. A constraint is defined as a coloring requirement in a graph: if a single vertex or a pair of vertices is restricted by some coloring requirement, we say that the vertex or vertices is/are constrained. Given the complexity of practical situations, in this study we define two types of constraints, as discussed below.

\vspace*{-4pt}
\begin{itemize} \itemsep -2pt
\item[-] \textit{Host constraint}: certain hosts must be installed with some specified types of software to perform required functionality;
\item[-] \textit{Software constraint}: a combination of software \emph{must} be or \emph{must not} be assigned to specified hosts simultaneously.
\end{itemize}


\vspace*{-5pt}
Let $S$ denote the set of available software packages. Different software products (including different versions) are represented as distinct colors in our graph, so the number of colors is $k = |S|$. We assume that the colors are randomly ordered and numbered $1, 2, 3, \cdots, k$ before the color-assignment algorithm is initialized. Let $H$ be the set of coloring assignments for all vertices, and let $H_{i} \subseteq \{c_{1}, c_{2}, \cdots, c_{k}\}$ represent the set of colors assigned to vertex $v_{i}$.

To express the host constraints, we introduce an $n \times k$ binary matrix called the \emph{constraint matrix} $C$, where each row represents a vertex (in this case a host) in the graph and each column represents a color (in this case a software package). The $ij$-th element $c_{ij}$ specifies the constraint of color $j$ on vertex $i$: $c_{ij}$ is set to 1 if the $i$-th vertex is constrained to color $j$, and to 0 if there is no such coloring constraint on the $i$-th vertex. If this matrix happens to be sparse, for simplicity we can use a collection of tuples to represent the coloring constraints. For example, $CSTR_{h}$ = \{$(v_{1}$, \{$c_{1}, c_{2}$\}), $(v_{3}$, \{$c_{5}$\})\} means that vertices $v_{1}$ and $v_{3}$ are fixed with colors $c_{1}, c_{2}$ and color $c_{5}$, respectively.

As for the software constraints, we introduce another form to record installation restrictions. We denote a software constraint using the syntax $CSTR_{s} = \{(s, \varepsilon)\}$, where $s \in \{0,1\}$ and $\varepsilon \subseteq \{c_{1}, c_{2}, \cdots, c_{k}\}$. Here, $s$ is the indicator of co-dependence and $\varepsilon$ is a set of colors. When $s$ equals 0, the colors in $\varepsilon$ must not be assigned together to any vertex; when $s$ equals 1, the colors in $\varepsilon$ must be allocated simultaneously in order to perform certain functionality. Software constraints are important because, for instance, we need to avoid allocating software with redundant functionality to the same host. To illustrate, consider an assignment characterized by $CSTR_{s} = \{(0, \{c_{1}, c_{2}, c_{4}\}), (1, \{c_{1}, c_{3}\})\}$, in which $c_{1}, c_{2}, c_{4}$ represent three distinct pieces of software with equivalent functionality: we must avoid coloring a single vertex with any combination of $c_{1}, c_{2}$, and $c_{4}$ (i.e., they must be assigned separately), while colors $c_{1}$ and $c_{3}$ must be assigned together to a vertex.
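A check of this form can be sketched as below (an illustrative helper of our own, not the paper's implementation), returning whether one vertex's color set respects every tuple in $CSTR_{s}$:

```python
# Check a single vertex's color set against software constraints.
# assigned: set of colors on one vertex; cstr_s: list of (s, eps) tuples,
# where s = 0 forbids assigning two or more colors from eps together,
# and s = 1 requires all colors in eps to be assigned together.
def satisfies_software_constraints(assigned, cstr_s):
    for s, eps in cstr_s:
        overlap = assigned & eps
        if s == 0 and len(overlap) > 1:
            return False   # mutually exclusive colors co-assigned
        if s == 1 and overlap and overlap != eps:
            return False   # co-dependent set only partially assigned
    return True

cstr = [(0, {1, 2, 4}), (1, {1, 3})]
```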


\subsection{Software Assignment Algorithms}

%\renewcommand{\algorithmicrequire}{\textbf{Input:}}
%\renewcommand{\algorithmicensure}{\textbf{Output:}}
%\renewcommand{\algorithmicendprocedure}{\textbf{Return}}

In this section, we present our software assignment algorithm, which produces an assignment of colors to vertices in a graph, subject to the set of constraints defined in the previous section. The algorithm consists of two phases: a labeling phase and a coloring phase.

In the labeling phase, each vertex in the graph $\{v_{1}, v_{2}, \cdots, v_{n}\}$ is assigned a unique number from $1$ to $n$, where $n$ is the total number of vertices. Next, based on the numbers assigned, we order the vertices so that those with smaller labels are listed first. Several ordering heuristics are available for labeling: random ordering, increasing-degree ordering, and decreasing-degree ordering. In this paper, we choose random ordering as the basis of our labeling (the effects of ordering are evaluated in Section 4).

Following the labeling stage, the second phase of our algorithm colors all the vertices in the ordered list sequentially. To make our work more applicable to real-world scenarios, we consider the case where each host/vertex $v_{i}$ needs to install $w_{i}$ different software products/colors ($w_{i} \geq 0$). In this way, a requirement vector $W=[w_{1}, w_{2}, \cdots, w_{n}]$ is formed, and assigning a weight $w_{i}$ to each vertex $v_{i}$ indicates that $w_{i}$ colors are needed for that vertex. Based on the requirement vector, we introduce a set UNCOLOR containing all the vertices that have not yet fulfilled their predefined requirements, including vertices that have not been colored at all and those that are incompletely colored, i.e., whose number of assigned colors is less than $w_{i}$. Each iteration of the coloring phase consists of one pass through all the uncolored vertices for each color value sequentially, and the phase terminates when all vertices have been colored or there are no feasible colors for the current vertex. A feasible coloring solution not only meets all the requirements but also assigns diversified colors to connected vertices.

%
%\begin{algorithm}\tiny[!t]
%\setlength{\abovedisplayskip}{1.5cm}
%\setlength{\belowdisplayskip}{2cm}
%  \begin{algorithmic}[1]
%
%  %\small
%  \caption{Color Assignment Algorithm I}
%  \label{alg:Framwork}
%  %  \Require
%      (1) adjacency matrix of graph $G=(V, E)$, where $G[i][j] = 1$ if $(i,j) ∈ E$ and $G[i][j] = 0$ if $(i,j) ∉ E$;
%      (2) Available colors are ordered and represented by integers $1, 2, \cdots, k$;
%      (3) ordering $ω$ of vertices in $V$;
%      (4) constraint matrix $CSTR_{h}$ and constraint set $CSTR_{s}$;
%      (5) $W = (w_{1}, w_{2}, \cdots, w_{n})$.
%  %  \Ensure
%      A color assignment of $k$ colors, $1$ through $k$, to vertices of $G$ represented by an array. 
%     \textbf{MAIN}
%    \State initialize array $X$
%    \State UNCOLOR $\leftarrow \emptyset$
%    \For{$l \leftarrow 1$ to \To $n-1$}
%\State pick unlabelled $v \in V$ at random
%\State $label(v) \leftarrow l$
%\EndFor
%    \State UNCOLOR $\leftarrow V$
%    \State UNCOLOR $\leftarrow$ ApplyConstraint($G, CSTR_{h}$)
%    \For{$i$ from the smallest to \To largest integer (color)}
%    \For{each vertex $e$ in UNCOLOR\To }
%		\State ColorVertexI($G, CSTR_{s}, i, e$)
%	\EndFor
%	\EndFor
%%\State
%\Procedure {ApplyConstraint}{$G, CSTR_{h}$}
%	\For{for each vertex $j$ related by constraint $c_{h}(j) \in CSTR_{h}$ \To }
%		\State color$(X[j]) \leftarrow c_{h}(j).\varepsilon$
%		\State update UNCOLOR and $w_{i}$
%		\If{($!w_{e}$)}
%			\State UNCOLOR $\leftarrow$ UNCOLOR - $\{j\}$
%		\EndIf
%	\EndFor
%\State    \textbf{return}(UNCOLOR)
%\EndProcedure
%%\State
%\Procedure{ColorVertexI}{$G, CSTR_{s},i,e$}
%\For{each vertex $r \in N_{G}(e)$}
%\State check $H_{r}$ and $c_{s} \in \{CSTR_{s}|i \in C_{s}.\varepsilon\}$
%\State color$(X[e]) \leftarrow i$
%\State update $w_{e}$
%\If{$!w_{e}$}
%\State UNCOLOR $\leftarrow$ UNCOLOR - $\{e\}$
%\EndIf
%\EndFor
%\EndProcedure
%  \end{algorithmic}
%\end{algorithm}


Once the algorithm has been initialized, the main process begins by randomly assigning every vertex a distinct number as its label. It then successively assigns the set of available colors to the ordered vertices. In each iteration, when the current color is determined, the algorithm scans the vertices in UNCOLOR and checks whether any of them can be colored according to the given constraints. ColorVertexI is the procedure that checks the constraint matrix, as well as the software constraints, to ensure that all pre-defined constraints are satisfied when assigning colors. A vertex assigned the current color in ColorVertexI has its weight decreased by 1, and vertices whose weight reaches 0 are removed from UNCOLOR. If a vertex violates any constraint or its weight is still larger than 0, it remains in UNCOLOR for the next iteration. The process stops when either the UNCOLOR set or the feasible color set is empty.
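The pass described above can be sketched as follows (a simplified, hypothetical rendering that keeps only the zero-defective-edge rule and the weight bookkeeping; host and software constraint checks are omitted):

```python
# Greedy first-phase coloring in the style of ColorVertexI: colors are
# tried in order, and a vertex accepts a color only if no neighbor
# already holds it (so no defective edge is ever created).
def color_vertex_I(adj, weights, k):
    """adj[v]: neighbor list; weights[v]: colors needed; k: color count."""
    n = len(adj)
    H = [set() for _ in range(n)]                 # colors per vertex
    uncolor = {v for v in range(n) if weights[v] > 0}
    for color in range(1, k + 1):
        for v in sorted(uncolor):                 # snapshot of UNCOLOR
            neighbor_colors = set().union(*(H[u] for u in adj[v])) \
                if adj[v] else set()
            if color not in neighbor_colors and color not in H[v]:
                H[v].add(color)                   # accept current color
                if len(H[v]) == weights[v]:
                    uncolor.discard(v)            # requirement fulfilled
    return H, uncolor                             # uncolor: leftovers
```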


Although ColorVertexI keeps the number of defective edges at the absolute minimum (zero), there is a high probability that not all nodes get colored, due to the rigorous coloring constraints of the algorithm. To further color those remaining nodes, we propose ColorVertexII. As a supplement to ColorVertexI, ColorVertexII relaxes some of the hard constraints by allowing certain adjacent nodes to share the same color. However, with the overall goal of increasing the survivability of the networked system, such relaxation must still follow certain principles. Specifically, instead of targeting the reduction of the number of defective edges, ColorVertexII shifts its focus to minimizing the size of the maximal connected component.

\begin{figure} [h]
  %\centering
  \setlength{\abovecaptionskip}{-0.08cm}
	\setlength{\belowcaptionskip}{-0.3cm}
  \begin{center}
  \includegraphics[width=2in]{images/compare1.png}
  \hspace{0in}
  \includegraphics[width=2in]{images/compare2.png}
  \caption{Two random graphs with different software assignments}
  \label{fig.compare}
  \end{center}
\end{figure}

For better understanding, we use an example to illustrate this point. Of the two graphs in Fig. 2, the left graph has 4 defective edges (\{$v_1, v_5$\}, \{$v_2, v_7$\}, \{$v_3, v_6$\}, \{$v_4, v_8$\}), none of which share a common node; the right graph contains 3 defective edges (\{$v_1, v_4$\}, \{$v_2, v_4$\}, \{$v_4, v_8$\}) that all share the common node $v_{4}$. The maximal number of infected nodes in the left graph is 2 (assuming the attack starts from one node), while in the right graph this number is 4. If an attack takes place on both graphs, the potential damage in the left graph is smaller than in the right graph, even though the left graph has a larger number of defective edges.


\begin{algorithm}
  \caption{Color Assignment Algorithm II}
  \label{alg:Framwork}
  \small
  \begin{algorithmic}[1]
\STATE \textbf{procedure} ColorVertexII($G, CSTR_{h}, e$)
\STATE $size \leftarrow \infty$
\FOR{each color $i$, from the smallest to the largest integer}
\STATE $temp\_size \leftarrow BFS(G,e,i)$
\IF{$temp\_size < size$}
\STATE $size \leftarrow temp\_size$
\STATE $icolor \leftarrow i$
\ENDIF
\ENDFOR
\STATE color$(X[e]) \leftarrow icolor$
\STATE update $w_{e}$
\IF{$w_{e} = 0$}
\STATE UNCOLOR $\leftarrow$ UNCOLOR $-$ $\{e\}$
\ENDIF
  \end{algorithmic}
\end{algorithm}


After the ColorVertexI process, if the UNCOLOR set is empty, the algorithm returns a perfect coloring solution with the maximum number of connected components (i.e., each vertex is its own connected component). Otherwise, ColorVertexII is called after ColorVertexI finishes its coloring process. For each vertex remaining in the UNCOLOR set, it tries every color one by one and chooses the one with the least penalty. Since a penalty occurs whenever a defective edge appears, the least penalty means that the assigned color forms a connected component of the smallest size compared to all other colors. This process repeats until the UNCOLOR set is empty. 
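A minimal sketch of this relaxed step in Python, assuming each vertex holds a single color (the multi-color case iterates this once per remaining weight); `component_size_if` measures via BFS the monochromatic component that would form around the vertex, and both function names are hypothetical:

```python
from collections import deque

def component_size_if(graph, color_of, v, c):
    """Size of the monochromatic component containing v if v took color c."""
    size, seen, q = 1, {v}, deque([v])
    while q:
        x = q.popleft()
        for y in graph[x]:
            if y not in seen and color_of.get(y) == c:
                seen.add(y)
                q.append(y)
                size += 1
    return size

def color_vertex_ii(graph, color_of, v, colors):
    """Relaxed assignment: pick the color whose resulting
    monochromatic component around v is the smallest (least penalty)."""
    best_c = min(colors, key=lambda c: component_size_if(graph, color_of, v, c))
    color_of[v] = best_c
    return best_c
```

For instance, on a star whose two leaves already share one color, the center is steered to a different color, keeping every component at size 1.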

To include ColorVertexII, we also need to append the following algorithm fragment at the end of the MAIN function.

\begin{algorithmic}[1]
\small
\LOOP
\IF{UNCOLOR $= \emptyset$}
\STATE exit
\ELSE
	\FOR{each vertex $e$ in UNCOLOR}
	\STATE ColorVertexII($G,CSTR_{s},e$)
	\ENDFOR
\ENDIF
\ENDLOOP
\end{algorithmic}

\vspace{2pt}
\noindent \textit{Time Complexity Analysis.} Initially, there are $n$ nodes available for coloring. In ColorVertexI, all $k$ available colors have to be tentatively assigned to every node, so there are $n*k$ checks in this step. Suppose, in the worst case, that all $n$ nodes remain uncolored and each node still needs $w_{i}$ colors assigned; then ColorVertexII needs $n*w_{i}*k$ rounds to pick the optimal color for each of them. To satisfy the algorithm constraints, after each color allocation ColorVertexII also needs to check the size $n_{i}$ of the connected component containing the current node in order to find the color assignment with the minimal value. This makes the time complexity of ColorVertexII $O(n*k+n*w_{avg}*n_{i}*k)$, where $w_{avg}$ is the average weight over all nodes in the network, and $O(n^{2}*k)$ in the worst case (when the expanding vertex set for each color in ColorVertexII is as large as the entire graph and $n_{i}$ becomes $n$).



\subsection{Further Survivability in a Dynamic Environment} 

Although software diversity is effective in blocking malicious attacks by increasing the attack complexity, in a relatively static environment (where, for example, software stacks, configurations, and various system parameters remain unchanged over long periods of time), an attacker will eventually succeed given sufficient time and effort. Thus, a more advanced mechanism is needed to survive long-lasting (persistent) attacks.

One intuitive method to resolve this problem is to turn the networked system into a temporally dynamic model by adopting software shuffling. By periodically re-allocating software according to predefined constraints and requirements, the infrastructure continually changes to confuse the attacker and thus delay the attack. Even if an attacker has collected enough information about the system and successfully launched an attack once, the success could hardly be replicated due to the frequently altering settings. More specifically, different software assignment solutions can be generated by simply changing the order of the vertices to be colored at random. In that way, by permuting the order of the vertices, our algorithm outputs multiple assignment solutions of fair quality and can thus provide diverse assignment solutions for shuffling. The shuffling triggering mechanism can be either time-based or event-based. Previous studies have shown that re-diversification provides actual benefits against certain attacks, such as probing and incremental attacks \cite{nguyen2010effectiveness}, and is effective against automated attacks \cite{yang2008improving}. 
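The permutation idea can be sketched as follows: a first-fit proper coloring that follows a randomly shuffled vertex order yields a different yet equally valid assignment per shuffle. This is a simplified, single-color-per-host sketch, not the full constrained algorithm; both function names are hypothetical:

```python
import random

def greedy_assignment(graph, order, colors):
    """First-fit proper coloring that follows `order`; permuting
    `order` yields different, equally valid assignments."""
    color_of = {}
    for v in order:
        used = {color_of[u] for u in graph[v] if u in color_of}
        color_of[v] = next(c for c in colors if c not in used)
    return color_of

def shuffled_assignment(graph, colors, seed):
    """One shuffling round: permute the vertex order, then recolor."""
    order = list(graph)
    random.Random(seed).shuffle(order)
    return greedy_assignment(graph, order, colors)
```

Each call with a fresh seed corresponds to one shuffling round; every produced assignment keeps adjacent hosts on different software.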


One practical question might be brought up: \textit{how to deploy the dynamic strategy in practice without (or with minimal) interruption of routine system operations?} To realize efficient software shuffling under this requirement, we suggest drawing support from recent virtualization technology. For example, virtual appliances~\cite{sapuntzakis2003virtual} can be used to simplify software deployment and management. We may deploy and manage multiple software packages within a virtual appliance as a unit, and distribute this unit to hosts in the network. Note that users' personal settings and data are preserved separately on a dedicated server, which allows them to synchronize across ``moves'' and updates of the appliances. 

Given the complexity of this problem, we leave a full treatment to future work; here we only provide general guidelines identifying possible principles and directions for future studies. Preliminary experiments on the feasibility of software shuffling are presented in the next section. 



\section{Evaluations}
In this section, we present simulation results for evaluating the performance of our method with respect to the above-mentioned requirements.

\subsection{Simulation Setup}
Recall that our model is built on top of an undirected graph $G$ as the abstraction of networked systems (Section 2.1). To fully investigate the performance of our algorithm in arbitrary systems, we use three representative topologies to characterize the behaviors of real-world systems. Specifically, we consider three types of graphs with different degree distributions: random graphs, regular graphs, and power-law graphs. A regular graph often does not have high connectivity, so long and circuitous routes are required to reach other nodes. Typical examples of regular graphs include the lattice and the ring lattice, which can be used to characterize a vertex whose behavior depends on that of its nearest neighbors. In a random graph, each pair of vertices is connected with the same probability, and the vertex degrees follow a Poisson distribution. In a power-law graph, the degree distribution satisfies a power law; it contains highly connected nodes (hubs) and is a good model for highly connected systems. We believe these three types of graphs provide reasonable coverage of realistic networked systems. 

In this simulation, we generated $k$-regular graphs using the \textit{Graph-Maker-0.02} package in Perl with the default degree set to 4. Random graphs and power-law graphs were generated using the \textit{igraph} package in R, each with about the same average degree as the regular graphs. Given approximately the same degree settings across all three tested graph types, we can observe whether network connectivity affects the defense capability of our method. A summary of the graphs used in our simulations is listed in Table~\ref{table2}. All experiments were performed on graphs with a default size of 500 unless otherwise specified. In our opinion, this size is adequate to illustrate the results of our study, since our proposed mechanism only targets networked systems with a central authority. All simulation results are presented as the average of 50 trials. 


\begin{table}[!ht] \small
%\setlength{\abovecaptionskip}{-0.2cm}
	%\setlength{\belowcaptionskip}{-0.01cm}
\renewcommand{\arraystretch}{1.3}
\caption{Graphs used in the simulations.}   
\label{table2}
\centering
%\begin{center}
\begin{tabular} {|l|l|l|l|}
\hline
         Graph Type & $|V|$ & Parameter & Average degree $\mu$   \\   \hline
         Regular & 500 & 4 & 4 \\
         Random & 500 & p = 0.008 & 4.075 \\
         Power-law & 500 & m = 2 &  5.9\\
    \hline
%\end{tabular} 
%\end{center}
\end{tabular}
\end{table}

\vspace{-3pt}
To evaluate our approach under various settings, we ran the simulations with different combinations of the number of colors and weights, denoted $\#color$ and $\#weight$, respectively. The total $\#color$ in the ``color pool'' represents the number of unique software choices available for hosts, and $\#weight$ represents the average number of distinct software packages finally installed on each host. We also define the parameter $r = \#weight/\#color$. We set $\#color$ to 20, considering the average number of software packages usually installed on a real-world system. In order to see how the average weight impacts the performance metrics, $\#weight$ was set to integer values ranging from 1 to 15. Intuitively, the larger $r$ is, the less heterogeneous the system. We avoided values larger than 15 because beyond that point the whole system becomes almost homogeneous.

\subsection{Simulation Metrics}

We adopted two metrics for evaluating the performance of our approach: 1) the maximal number of nodes that can possibly be compromised; 2) the average size of isolated connected components. For the first metric, we define $S$ as the number of nodes contained in the largest connected component, denoting the number of machines compromised under the worst-case attack. Nodes within the maximal connected component are always an attacker's first choice for penetrating the network. In addition to the largest size, we also considered the average size of all connected components, which we think indicates the overall robustness of the system. Although the average size of connected components alone may not directly indicate the survivability of the system, taken together with $S$ it can be used to characterize the separate infections caused by an attack. We use the symbol $\overline{s}$ to denote the average size of connected components in the system. 
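Both metrics can be computed with a single traversal over monochromatic components (a sketch assuming one color per node; dividing by $|V|$ gives the normalized values shown in the figures; the function name is hypothetical):

```python
def component_metrics(graph, color_of):
    """Return (S, s_bar): the largest and the average size of
    monochromatic connected components."""
    seen, sizes = set(), []
    for s in graph:
        if s in seen:
            continue
        c = color_of[s]
        stack, size = [s], 0          # DFS restricted to color c
        seen.add(s)
        while stack:
            x = stack.pop()
            size += 1
            for y in graph[x]:
                if y not in seen and color_of[y] == c:
                    seen.add(y)
                    stack.append(y)
        sizes.append(size)
    return max(sizes), sum(sizes) / len(sizes)
```

On a path of four nodes colored a-a-b-b, the components are \{0,1\} and \{2,3\}, so $S = 2$ and $\overline{s} = 2$.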

\subsection{Simulation Results}

%\vspace{5pt}
\noindent \textbf{The Property of $r$.} In order to better understand the performance of our proposed algorithm, we used a power-law graph as an example to show how $\#weight$ and $\#color$ impact the heterogeneity of a system. Figure \ref{fig_weight&color} shows the variation of the maximal connected component size $S$ in response to changes of $\#weight$ and $\#color$ in a power-law graph. 


We observed that, given the same $\#color$, the size of the largest connected component grows as $\#weight$ increases. In addition to this general trend, we also noticed that all three lines with different $\#color$ values are relatively flat at the beginning and are followed by a sharp increase when $r$ reaches around 0.4. Before this critical point, $S$ remains relatively small (i.e., when $r < 0.4$, $S$ is less than 0.1). However, when $r$ is greater than 0.4 (e.g., $\{\#weight = 4, \#color = 10\}$, $\{\#weight = 6, \#color = 15\}$, $\{\#weight = 8, \#color = 20\}$, each with $\#weight/\#color = 0.4$), $S$ begins to rapidly approach 1. 

The simulation results confirm that the amount of heterogeneity our algorithm can create depends on the value of $r$. In general, a greater $r$ tends to generate a larger maximal connected component, which in turn indicates a more homogeneous system. For instance, when $\#weight$ equals $\#color$ ($r = 1$), there is only one component in the graph (the worst case). The results also suggest a very limited decrease of the maximal connected component size when $r$ is less than 0.4. So for systems that already have an $r$ value below 0.4, there is no need to either increase $\#color$ by purchasing more software or decrease $\#weight$ by adding more hosts to the networked architecture.

\begin{figure}[!t]
	%\centering
	\setlength{\abovecaptionskip}{-0.1cm}
	%\setlength{\belowcaptionskip}{-0.2cm}
	\begin{minipage}[t]{0.47\linewidth}
		%\centering
		\includegraphics[width=2.3in, height = 1.6in]{images/weight&color_new_small.png}
		\caption{$S$ as a function of $\#$\textit{weight} and $\#$\textit{color}.}
		\label{fig_weight&color}
	\end{minipage}%
	\hfill \begin{minipage}[t]{0.47\linewidth}
		%\centering
		\includegraphics[width=2.53in, height = 1.65in]{images/comparetopology_small.png}
		\caption{Performance on three representative graphs.}
		\label{fig_comparettt}
	\end{minipage}
\end{figure}

\vspace{5pt}
\noindent \textbf{Performance on Different Topologies.} Having observed the impact of $r$ on system heterogeneity, we conducted further simulation experiments to see whether different topologies could also affect the size of the maximal connected component $(S)$ and the average size of isolated connected components $(\overline{s})$. As can be seen from Figure \ref{fig_comparettt}, although the same ``flat to sloping'' trend was observed in all three topologies, the maximal connected component size of a power-law graph starts increasing at a relatively low $r$ value (0.4) compared to a random graph (0.55), whereas that of a regular graph starts to increase at a relatively higher $r$ value (0.6). We think this difference in ``turning points'' is due to the distinct connectivity of each kind of topology. Given the existence of highly connected nodes in a power-law network, a large maximal connected component tends to form more easily than in the other two types of graphs. In contrast, characterized by low connectivity and relatively high cliquishness \cite{premo2012local}, a regular graph only generates a large maximal connected component at larger $r$. 




In addition to measuring the size of the maximal connected component, we also monitored the average size of all connected components, $\overline{s}$. Unlike our earlier observations, we found that $\overline{s}$ increases only gently with the ratio of $\#weight$ to $\#color$. Although there also appear to be turning points in all three distributions, even when $r$ exceeds the threshold, $\overline{s}$ maintains relatively small values (when $r$ reaches 0.65, $\overline{s}$ for all three graphs is still smaller than 0.1) compared to the values of the maximal connected component size. The explanation for this phenomenon is that, once the maximal connected component has formed (after $r$ exceeds the threshold), the sizes of the connected components become polarized: apart from the maximal ones, the remaining connected components have sizes between 1 and 20. 

We believe the above results are useful, as they suggest that organizations with a limited budget or software availability could still enhance their system heterogeneity by changing the underlying topology. 



\begin{figure}[!t]
	%\centering
	\setlength{\abovecaptionskip}{-0.1cm}
	%\setlength{\belowcaptionskip}{-0.2cm}
	\begin{minipage}[t]{0.47\linewidth}
		%\centering
		\includegraphics[width=2.4in, height = 1.7in]{images/hub_small.png}
		\caption{Manipulating $\#weight$ on highly connected nodes.}
		\label{fig_hub}
	\end{minipage}%
	\hfill \begin{minipage}[t]{0.47\linewidth}
		%\centering
		\includegraphics[width=2.5in, height = 1.7in]{images/comparealgorithm_small.png}
		\caption{Comparison with other algorithms.}
		\label{fig_compareaaaaa}
	\end{minipage}
\end{figure}

\vspace{5pt}



\noindent \textbf{Effect of Highly Connected Nodes.} After investigating the impact of different ratios of $\#weight$ to $\#color$, as well as of the topologies, we performed a more advanced analysis of highly connected nodes, trying to better understand their effects on generating heterogeneous system structures. We targeted hub nodes given their key roles in a graph: attacking only a few of them could devastate the entire system, whereas adjusting some of them could lead to a totally different software assignment and in turn significantly reduce epidemic spread. To explore how changes to hub nodes influence the size of the maximal connected component $(S)$, we introduce two parameters $f$ and $d$, where $f$ designates the top percentage of nodes with the highest degrees and $d$ denotes the $\#weight$ removed from those top nodes. We set $f$ to $1\%$ and $2.5\%$, which means we pick the top 5 and 12 nodes, respectively, and we observe $S$ as a function of $r$ for $d = 3$ and $d = 5$. The number of colors $\#color$ in this case is 20. Different combinations of $f$ and $d$ are tested and the results are plotted in Figure \ref{fig_hub}.
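The hub adjustment itself is straightforward; a minimal sketch, assuming a weight may not drop below 1 (the helper name is hypothetical):

```python
def reduce_hub_weights(graph, weights, f, d):
    """Take the top fraction f of nodes by degree and reduce each
    of their weights by d (never below 1)."""
    k = max(1, round(f * len(graph)))
    hubs = sorted(graph, key=lambda v: len(graph[v]), reverse=True)[:k]
    adjusted = dict(weights)
    for v in hubs:
        adjusted[v] = max(1, adjusted[v] - d)
    return adjusted
```

On a 5-node star with $f = 0.2$ and $d = 3$, only the center's weight is reduced (from 5 to 2) while the leaves keep theirs.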

As can be seen from the plot, $S$ can be significantly reduced by decreasing the weight on a portion of the highly connected nodes. In addition, we found that if there are constraints on the minimum weight of hub nodes, one can also reduce $S$ by picking more hub nodes and cutting the maximum allowed weight on each of them. In practical terms, organizations can either reduce their vulnerabilities by installing less software on the most connected servers, or, if a certain amount of software must be installed on each hub server, pick more servers and remove less software from each of them.


\vspace{4pt}


\noindent \textbf{Comparison with Other Algorithms.} Next, we compared our algorithm with two related methods, randomized coloring and color flipping, as proposed in a previous study \cite{o2004achieving}. In randomized coloring, each node picks a tentative color uniformly at random from the color pool, whereas color flipping extends randomized coloring by allowing each node to perform a local search among its immediate neighbors and switch colors to decrease the number of locally defective edges. Besides, we also compared our algorithm with the optimal solutions obtained by brute-force search. We compared these algorithms in terms of the maximal connected component size ($S$) achieved at each ratio of $\#weight$ to $\#color$ ($r$). The results are plotted in Figure \ref{fig_compareaaaaa}.
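For reference, the two baselines can be sketched as follows; this is a simplified greedy interpretation of the flipping rule, not a faithful reimplementation of \cite{o2004achieving}, and the function names are hypothetical:

```python
import random

def randomized_coloring(graph, colors, seed=0):
    """Baseline 1: each node picks a color uniformly at random."""
    rng = random.Random(seed)
    return {v: rng.choice(colors) for v in graph}

def defective_degree(graph, color_of, v, c):
    """Number of neighbors of v that hold color c."""
    return sum(1 for u in graph[v] if color_of[u] == c)

def color_flipping(graph, color_of, colors, rounds=10):
    """Baseline 2: each node greedily switches to the color that
    minimizes its locally defective edges; repeat until stable."""
    for _ in range(rounds):
        changed = False
        for v in graph:
            best = min(colors,
                       key=lambda c: defective_degree(graph, color_of, v, c))
            if (defective_degree(graph, color_of, v, best)
                    < defective_degree(graph, color_of, v, color_of[v])):
                color_of[v] = best
                changed = True
        if not changed:
            break
    return color_of
```

Starting from a fully monochromatic triangle with three colors, a few flipping rounds remove every defective edge.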

As we can see from Figure \ref{fig_compareaaaaa}, our algorithm outperformed both the randomized coloring and color flipping methods by creating smaller connected components for the same $r$. Initially, $S$ in our algorithm increased more slowly than in the other two methods. Only when $r$ increases to 0.75 does $S$ in our algorithm approximately equal that of the randomized coloring and color flipping approaches, indicating that our algorithm loses its advantage and its performance becomes almost the same as the previous algorithms when $r$ gets close to 1. The results also show that the performance of our algorithm is very close to optimal. 
 
\vspace{5pt}


\begin{figure}[!t]
	%\centering
	\setlength{\abovecaptionskip}{-0.1cm}
	%\setlength{\belowcaptionskip}{-0.2cm}
	\begin{minipage}[t]{0.47\linewidth}
		%\centering
		\includegraphics[width=2.4in, height = 1.7in]{images/color_distribution_small.png}
		\caption{Chaotic unpredictability of the shuffling strategy}
		\label{fig_colordistribution}
	\end{minipage}%
	\hfill \begin{minipage}[t]{0.47\linewidth}
		%\centering
		\includegraphics[width=2.5in, height = 1.7in]{images/movingquality_small.png}
		\caption{The quality of shuffling}
		\label{fig_moving}
	\end{minipage}
\end{figure}

\noindent \textbf{The Feasibility of Shuffling.} To validate the feasibility of the shuffling process, we evaluated its performance from two aspects: 1) the chaotic unpredictability of the shuffling strategy; 2) the differences in the maximal and the average size of connected components after consecutive shufflings. In this section, we performed simulations on a power-law graph with $\#weight$ set to 10 and $\#color$ to 20.

First, the dominant colors of each generated maximal connected component are plotted in Figure \ref{fig_colordistribution}. As can be observed in this figure, the dominant colors in the largest connected components after each shuffling are evenly distributed. This validates the random nature of our shuffling strategy, which allows all colors to be assigned to nodes with equal probability. In other words, there is no way for an attacker to predict the prevailing color in the maximal connected component, so the attacker has to try every color with equal probability in order to compromise the network to the largest extent. 

The other metric that we measured is the maximal ($S$) and the average size of connected components ($\overline{s}$) after each shuffling. By consecutively running our color assignment algorithm 10??? times, we plotted the variations in $S$ and $\overline{s}$ in Figure~\ref{fig_moving}, with the upper line denoting $S$ and the lower one $\overline{s}$. In the figure, both $S$ and $\overline{s}$ stay relatively stable across the 10??? shuffles. This suggests that our shuffling strategy is independent of the initial state and generates fairly stable color assignments with approximately the same attack complexity.

The simulation results indicate that the shuffling strategy not only extends our algorithm by generating a large number ($|N|!$) of different random solutions, but also provides fairly stable performance without affecting the overall attack complexity. With all these, we claim that the shuffling method can be used to improve both the survivability of the networked system, through more color assignment options, and the unpredictability of the system.





\begin{wrapfigure}{l}{6.5cm}
\setlength{\abovecaptionskip}{-0.1cm}
	\includegraphics[width=6.5cm]{images/time_small.png}
	\caption{Time overhead (in seconds)}
	\label{fig_timenew}
\end{wrapfigure}

\noindent \textbf{Scalability and Computational Overhead.} Finally, we repeated the experiment on graphs of larger size and measured the computational overhead introduced by our algorithm. Figure \ref{fig_timenew} shows the average time required by the assignment algorithm as the system size increases, suggesting that our algorithm can be applied to large systems that scale up to thousands of nodes with acceptable time overhead. As shown in Figure \ref{fig_timenew}, on a commodity PC it takes about 10 minutes to assign colors to 10 thousand nodes. The simulation result confirms the practicality of our algorithm. 



\section{Conclusions}

In this work, we proposed a method for effectively containing automated attacks via software diversity. By building a heterogeneous networked system, our defense mechanism increases the complexity of the networked system using off-the-shelf diverse software, based on the observation that diverse software is vulnerable to different attacks. Considering the practical problems of software assignment, we presented a software assignment algorithm based on graph coloring, with real-world constraints and system prerequisites taken into account. We not only analyzed the effectiveness of our methodology, but also noted the weakness of adopting the diversity philosophy alone and therefore integrated it with software shuffling for further survivability.



\bibliographystyle{splncs}
\bibliography{lncsbib,mybibfile}

%
\end{document}
