\documentclass[review]{elsarticle}

\usepackage{lineno,hyperref,xfrac}
\usepackage{graphicx}
\usepackage{epstopdf}
\usepackage{subfigure}
\usepackage{array}
\usepackage{float}
\usepackage{color}
\usepackage{amsmath}
\usepackage{algorithm}
\usepackage{algorithmicx}
\usepackage{algpseudocode}
\modulolinenumbers[5]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\newcommand{\q}[1]{``#1''}
\newcommand{\highlight}[1]{\textcolor{blue}{#1}}
\journal{Journal of Systems and Software Templates}
\bibliographystyle{elsarticle-num}
\begin{document}
\begin{frontmatter}
	\title{Localizing Multiple Software Faults based on Evolution Algorithm}
		
	\author[mymainaddress]{Yan Zheng}
	\author[mymainaddress]{Zan Wang\corref{mycorrespondingauthor}}
	\cortext[mycorrespondingauthor]{Corresponding author}
	\ead{wangzan@tju.edu.cn}
	\author[mymainaddress]{Xiangyu Fan}
	
	
	\author[mysecondaryaddress]{Xiang Chen}
	\address[mymainaddress]{School of Computer Software, Tianjin University, China}
	\address[mysecondaryaddress]{School of Computer Science and Technology, Nantong University, China}
	
	\author[mythirdaddress]{Zijiang Yang}
	\address[mythirdaddress]{Department of Computer Science, Western Michigan University, USA}
	
\begin{abstract}
\highlight{During software debugging, programmers spend a significant amount of effort identifying the root cause of manifested failures. A significant number of spectrum-based fault localization techniques have been proposed to automate this procedure.} However, most existing fault localization approaches do not consider the fact that programs tend to have multiple faults. Considering faults in isolation results in less accurate analysis. \highlight{In this paper, we propose a flexible framework called FSMFL for localizing multiple faults simultaneously based on genetic algorithms. FSMFL can be easily extended with different fitness functions for the purpose of localizing multiple faults. We have implemented a prototype and conducted extensive experiments to compare FSMFL against existing spectrum-based fault localization approaches. The experimental results show that FSMFL is competitive in single-fault localization and superior in multi-fault localization.}
\end{abstract}
		
\begin{keyword}
multi-fault localization \sep program spectrum \sep genetic algorithm \sep search based software engineering
\end{keyword}	
\end{frontmatter}

\section{Introduction}
\highlight{Testing and debugging are considered the most expensive phases in the entire software development cycle \cite{Beizer:1990}. One of the main reasons for this high cost is that fault localization, the process of tracing the propagation of faults and identifying the location of erroneous program statements, is labor intensive and time consuming.}
	
To reduce the cost, many fault localization techniques that automate, or partially automate, the fault localization procedure have been proposed in the past decade. Among them, \highlight{Spectrum-based Fault Localization (SFL) methods \cite{Wong2016,Jones,Abreu,Yoo2012,Dallmeier2005,Chen,Naish2011}} have been shown to be effective and efficient. These methods require a comprehensive test suite to provide sufficiently many passing and failing executions. In this context, fault localization is conducted by comparing and contrasting these passing and failing executions, and then assigning suspicious scores to executed statements. The output of SFL methods is a list of statements ranked by their suspicious scores in descending order. 
	
\highlight{Numerous SFL techniques with a variety of suspicious score computation functions have been proposed. All of these techniques share the common goal of assigning a suspicious score as high as possible to a buggy statement, and as low as possible to a correct one. According to the study by Xie et al. \cite{Xie2016}, fault localization techniques tend to offer no help in debugging small programs. This is understandable, as examining the ranked statements may take as much effort as examining the source code of a small program directly. Therefore, fault localization techniques are more likely to be used for debugging large programs, where multiple faults typically exist. From another perspective, according to the investigation performed by X. Xia and L. Bao \cite{Xia2016ICSME}, professional developers can use and benefit from spectrum-based fault localization techniques. Their experimental results also reveal that both accurate and mediocre spectrum-based fault localization tools can help professional developers save debugging time, and the improvements are statistically significant and substantial. Unfortunately, most SFL methods are optimized for a single fault, and thus the ranked statements are less accurate when multiple faults exist. These methods assign higher suspicious scores to statements that are executed by more failed test cases, no matter whether the statements are faulty or not. For some faulty statements, SFL methods may assign low suspicious scores because not many failed test cases cover them. Consequently, when there are multiple faults, some of the faulty statements may be left behind by SFL methods.}

To investigate this issue, the relationship between faults and the influence of multiple faults on fault localization have been examined \cite{Debroy,DiGiuseppe2011,Xue}. The studies imply that different faults may interfere with each other and significantly decrease localization accuracy. Based on these findings, several approaches for localizing multiple faults, or multi-fault localization (in contrast to traditional single-fault localization), have been proposed \cite{Wong2016,Jones,Abreu,Yoo2012,Dallmeier2005,Chen,Naish2011}. However, these multi-fault localization approaches have several limitations that prevent their adoption in real applications. Some of them are based on linear programming, and thus are not general because not all suspicious functions can be converted to linear models. Others adopt unnecessarily complex algorithms that are not scalable.
	
\highlight{In this paper, we propose FSMFL, a Fast Software Multi-Fault Localization Framework based on Genetic Algorithms. The innovation of this approach is that we transform the multi-fault localization problem into a search problem in which the program statements are encoded as a chromosome indicating whether each statement is faulty. For example, \q{0100100000} represents ten lines of code where the second and fifth statements are faulty. Each chromosome is a candidate solution that is evaluated by a fitness function. The fitness function determines how well a candidate solution explains the failed and passed test cases that cover its statements. A genetic algorithm then generates a population that consists of a set of binary chromosomes. The genetic operators, such as selection, crossover, mutation, accepting and replacement, are employed to evolve the population until the algorithm reaches a predefined threshold. Finally, statements are ranked according to the last candidate population.}
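This chromosome encoding can be sketched in a few lines. The following is an illustrative Python sketch with hypothetical helper names, not the paper's prototype (which, per Section 4, is written in Go):

```python
# Illustrative sketch of the chromosome encoding described above.
# A chromosome is a bit vector over program statements; bit i == 1
# means statement i is assumed faulty in this candidate solution.

def decode(chromosome):
    """Return the 1-based indices of statements marked faulty."""
    return [i + 1 for i, bit in enumerate(chromosome) if bit == 1]

# "0100100000": ten statements, the 2nd and 5th assumed faulty.
chromosome = [int(b) for b in "0100100000"]
print(decode(chromosome))  # -> [2, 5]
```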
	
\highlight{We have implemented a prototype of FSMFL and evaluated it on a large number of faulty programs. Our benchmark consists of the Siemens programs, three Linux programs, the Space program and five Defects4J programs \cite{defects4j,defect4jtool}. Our experiments indicate that FSMFL outperforms state-of-the-art single- and multi-fault localization approaches.}
	
In summary, this paper makes the following contributions. 
\begin{itemize}
	\item[1.] \highlight{To the best of our knowledge, we are the first to propose an approach that transforms the multi-fault localization problem into a search problem so that a genetic algorithm can be exploited. A framework called FSMFL has been implemented. FSMFL is a flexible framework that accepts different population initialization strategies, fitness functions and termination criteria.}
	\item[2.] \highlight{We design a new fitness function for the purpose of multi-fault localization. The fitness function gives a candidate a higher fitness value if it covers more failed test cases and fewer passed test cases. Our experiments confirm that the fitness function is effective when handling both single-fault and multi-fault programs.}
	\item[3.] We perform various optimizations, including memory usage optimization, parallel computation and time complexity reduction, to improve the performance of FSMFL. Meanwhile, we have built a real, large-scale benchmark that contains both single-fault and multi-fault programs. \highlight{Our proposed FSMFL is evaluated against 8 other approaches on this benchmark. The evaluation shows that the execution time of FSMFL is less than 23 seconds for large-scale programs (over 30000 lines of code on average). Furthermore, three statistical hypothesis tests (ANOVA, LSD, Bonferroni) are also conducted to verify the competitiveness of FSMFL.}
\end{itemize}
	
The remainder of the paper is organized as follows. Section 2 describes the background and the motivation of our work. The detailed multi-fault localization approach is presented in Section 3, followed by empirical study in Section 4. After giving the related work in Section 5, Section 6 concludes the paper. 	

\section{Motivation}
In this section we give the motivation by using an example. Before that we present necessary background for spectrum-based fault localization.

\subsection{Preliminaries}
A program spectrum is defined as the execution information of a program during the testing process, including coverage information, test results, and executable conditional branches. \highlight{Spectrum-based fault localization requires a set of passed and failed test cases.} The statistical information used in traditional spectrum-based fault localization approaches is summarized in Table 1. The terms $n_{ep}(s)$ and $n_{ef}(s)$ represent the number of passed and failed test cases that execute the program entity \textit{s}, respectively. The terms $n_{np}(s)$ and $n_{nf}(s)$ denote the number of passed and failed test cases that do not execute the program entity \textit{s}. The terms $n$, $n_p$, and $n_f$ represent the number of test cases, the number of passed test cases and the number of failed test cases, respectively. Typically, a program entity is a program statement.

\begin{table}[htbp]
	\scriptsize
	\centering
	\caption{Statistical information in spectrum-based fault localization}
	\begin{tabular}{cp{2.5cm}<{\centering}p{2.5cm}<{\centering}p{2.5cm}<{\centering}}
		\hline
		& Program entity $s$ covered & Program entity $s$ not covered & Program entity summary \\
		\hline
		Passed test cases  &   $n_{ep}(s)$    &  $n_{np}(s)$     & $n_p$  \\
		Failed test cases  &  $n_{ef}(s)$     &  $n_{nf}(s)$     &  $n_f$\\
		Test cases summary    &    $n_{ep}(s)+n_{ef}(s)$   & $n_{np}(s)+n_{nf}(s)$      & $n$ \\
		\hline
	\end{tabular}%
\end{table}

Spectrum-based fault localization ranks program statements according to their suspicious scores. Intuitively, a statement has a higher suspicious score if it is frequently executed by failed test cases and seldom executed by passed ones. However, different approaches adopt different strategies to compute the suspicious scores. An implementation that computes suspicious scores is called a suspicious function. \highlight{Table 2 gives the formulas used by the suspicious functions in six spectrum-based fault localization approaches: Tarantula \cite{Jones}, Ochiai \cite{Abreu}, GP13 \cite{Yoo2012}, Ample \cite{Dallmeier2005}, Jaccard \cite{Chen} and OP2 \cite{Naish2011}. A systematic comparison of the effectiveness of different suspicious functions has been conducted in a previous empirical study \cite{Naish2011}.}

\begin{table}[htbp]
	\scriptsize
	\centering
	\caption{Formulas used in spectrum-based fault localization}
	\begin{tabular}{cccc}
		\hline
		Name & Formula & Name & Formula \\
		\hline
		Tarantula & $\frac{  \sfrac{n_{ef}(s)}{n_f} }{\sfrac{n_{ef}(s)}{n_f} + \sfrac{n_{ep}(s)}{n_p}}$ & Ample & $\left | \frac{n_{ef}(s)}{n_f} - \frac{n_{ep}(s)}{n_p} \right |$ \\
		Ochiai    & $\frac{n_{ef}(s)}{\sqrt{n_f\times(n_{ef}(s)+n_{ep}(s))}}$ & Jaccard & $\frac{n_{ef}(s)}{n_f+n_{ep}(s)}$ \\
		GP13      & $n_{ef}(s)\times \left ( 1+\frac{1}{2n_{ep}(s)+n_{ef}(s)} \right )$ & OP2 & $n_{ef}(s)-\sfrac{n_{ep}(s)}{(n_p+1)}$ \\
		\hline
	\end{tabular}%
\end{table}

\subsection{Motivating Example}
\highlight{The first column in Figure \ref{example_figure} gives a C program that finds the second largest value among its four inputs. There are two faulty statements, at Lines 14 and 17.
In the next ten columns we list ten test cases $T1$ to $T10$. We use 1 and 0 to denote whether a statement is covered by a test case. The last row indicates whether the test case above it is a passing ($P$) or failing ($F$) test case.}

\highlight{With this information we can easily obtain the values of $n_{ep}(s)$, $n_{np}(s)$, $n_{ef}(s)$, $n_{nf}(s)$, $n_p$ and $n_f$, and thus compute the suspicious scores using existing approaches. The last four columns give the rankings computed by the suspicious functions of Tarantula, Ochiai, OP2 and FSMFL. It can be observed that Line 14 is ranked 10th by the three existing approaches, a very poor result considering there are only 17 statements in the program. As for Line 17, it is ranked 4th by Tarantula and 9th by Ochiai and OP2. The example may not be very representative, but it does show that under multiple faults the existing approaches may give poor results. One of the reasons is that these approaches consider each statement in isolation and do not consider the effect of combinations of multiple statements. When there is only one faulty statement, it is likely to be covered by more failing executions and fewer passing ones. With multiple faults, the passing and failing executions are diluted, so less accurate results are obtained. For example, Lines 14 and 17 are each covered by one, but not both, of the two failing cases $T2$ and $T3$. On the other hand, there are many statements that are covered by both failing cases.}

\begin{figure}[H]
	\centering
	\includegraphics[width=1.0\textwidth]{f1.eps}
	\caption{A program with two faults and its suspicious scores}
	\label{example_figure}
\end{figure}
 	
\highlight{In this paper we propose an approach that does not consider statements in isolation. One intuitive idea is to consider multiple lines as a combination: for example, if we treat Lines 14 and 17 together, the combination is covered by both failing test cases and thus receives a higher suspicious score. In our approach, we design a fitness function to evaluate combinations of program entities effectively through a genetic algorithm integrated with a simulated annealing algorithm. For this example, our approach ranks Lines 14 and 17 at 5th and 4th, respectively.}

\section{Our Approach}
In this section we present our approach on multi-fault localization after necessary definitions. 
	
\paragraph{Definition 1 (Test Suite)} $T = \left \{ t_1, t_2, \ldots, t_n \right \}$ represents a test suite with $t_i$ being the $i^{th}$ test case. Moreover, we use $T_F$ and $T_P$ to denote the sets of failing and passing test cases, respectively.

\paragraph{Definition 2 (Program Entities)} $E = \left \{ e_1, e_2, \ldots, e_m \right \}$ represents a set of program entities. Each entity can be a statement, a function or a class. For test case $t_i$, $t_{ij}$ indicates whether the $i^{th}$ test case covers the $j^{th}$ entity.
	
\paragraph{Definition 3 (Candidate Solutions)} $C = \left \{ c_1, c_2, \ldots, c_m \right \}$ represents a candidate solution over the set of entities. The value of $c_j$ indicates whether the $j^{th}$ entity is assumed faulty. For example, \{0,1,0,1,0\} indicates that the second and fourth entities are assumed faulty while the others are assumed correct.
	
\subsection{Coverage Information Preprocess}
	
Before using the genetic algorithm, our approach has a preprocessing step that reduces the complexity of the coverage data, as large-scale programs may have a very large coverage matrix. Spectrum-based fault localization approaches depend on the execution coverage information; therefore two statements with the same coverage, or two test cases with the same execution coverage paths, are indistinguishable during ranking. Hence we merge adjacent entities with the same coverage into a single program entity.
	
An entity that is never covered by any failed test case has little chance of being identified as faulty by a spectrum-based fault localization approach. Since these entities only increase the search space, we exclude them before executing our genetic algorithm. After the genetic algorithm finishes, these excluded entities are appended to the final ranking list in the order of their calculation. More details will be given in Section 3.4.
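These two preprocessing steps can be sketched on hypothetical data as follows. For simplicity, this illustrative sketch groups any entities with identical coverage columns, whereas the paper merges adjacent ones:

```python
# Sketch of the preprocessing described above: (1) drop entities never
# covered by any failed test case, (2) merge entities whose coverage
# columns are identical, since they are indistinguishable when ranking.

def preprocess(columns, failed):
    """columns: {entity: tuple of 0/1 per test}; failed: set of failing test indices."""
    kept = {e: col for e, col in columns.items()
            if any(col[t] for t in failed)}
    merged = {}                        # coverage column -> group of entities
    for e, col in kept.items():
        merged.setdefault(col, []).append(e)
    return list(merged.values())       # each group shares one coverage column

columns = {"s1": (1, 1, 0), "s2": (1, 1, 0), "s3": (0, 0, 1), "s4": (1, 0, 0)}
failed = {0}                           # only test 0 fails
print(preprocess(columns, failed))     # -> [['s1', 's2'], ['s4']]
```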
	
\subsection{Framework Overview}
Traditional spectrum-based fault localization approaches calculate a suspicious score for every individual program entity. Different from those approaches, our framework starts with candidate solutions and then evaluates them with a fitness function. In a candidate, the entities with value 1 are assumed to be the (possibly multiple) faults. Exploring all possible candidates is infeasible due to the exponential computational complexity. To solve the problem within a limited amount of time, we propose to use a genetic algorithm to search for the best candidates in the whole search space.
	
\highlight{The genetic algorithm (GA) is an adaptive heuristic search algorithm \cite{Mitchell1998} based on the ideas of natural selection and genetics. It is widely used for generating solutions to optimization problems with complex search spaces. In a GA, a population of candidate solutions to an optimization problem is evolved toward better solutions. Optimal solutions can be found after a certain number of iterations.}
	
\highlight{A GA commonly has four procedures: \q{Initialization}, \q{Selection}, \q{Generation} and \q{Termination}. In the initialization procedure, an initial population of candidates is generated according to one of several strategies \cite{winter1996genetic}. In the selection procedure, a fitness function is used to evaluate the fitness value of each candidate. The higher the fitness value, the better the candidate. Only the candidates whose fitness values are high enough are selected for the next procedure. In the generation procedure, crossover and mutation occur, and new candidates are generated, evaluated and added to the population. The selection and generation procedures form a loop until the termination procedure stops the iterations.}
	
Based on the genetic algorithm, we propose the framework FSMFL, which consists of four components: \q{population initialization}, \q{candidate evaluation}, \q{next population generation} and \q{termination criterion setting}. First, a strategy is chosen to initialize the population, and the candidates in the population are evaluated by a fitness function. Then, new populations are generated through the selection, crossover, mutation, accepting and replacement operators. Finally, a termination criterion is used to determine whether to further evolve the current population. Figure \ref{algorithm-framework} shows the structure of \highlight{the proposed framework}. The details of every component are discussed in sequence in the remainder of this section.
	 	
\begin{figure}[H]
	\centering
	\includegraphics[width=.8\textwidth]{algorithm-framework.eps}
	\caption{Structure of FSMFL}
	\label{algorithm-framework}
\end{figure}	
	
\highlight{The pseudo-code of FSMFL is given in Algorithm~\ref{alg1}, which corresponds to the four aforementioned procedures. The parameter values used in the pseudo-code will be discussed later.}

\begin{algorithm}[H]
	\caption{\highlight{Fast Software Multi-fault Localization Framework}}
	\label{alg1}
	\begin{algorithmic}[1]
		\Require population size $\alpha$, crossover rate $\gamma$, mutation rate $q$, iteration count $\delta$
		\Ensure optimal $population$ consists of best candidates 
		\State // generate feasible solutions randomly and save them into the population
		\State $population \gets InitializeRandomly()$ \Comment{Initialize feasible candidates}
		\State $SortByEvaluating(population) $ \Comment{Evaluate and sort for selection}
		\State {// initialize the annealing probability; decrease it by 0.05 after every generation}
		\State $ \varepsilon \gets 0.9 $ \Comment{Initialize the simulated annealing probability to 0.9}
		\For{\textit{i} = 1 to $\delta$}\Comment{Iterate at most $\delta$ times}
		\State $new \gets nextGeneration(population,\gamma, q, \varepsilon) $  \Comment{update the population}
		\If{terminable($population, new$)} \Comment{check the termination criterion}
		\State \Return $population$
		\EndIf
		\State $ population \gets new$ 
		\If{$\varepsilon > 0.1$} \Comment{decrease if larger than 0.1}
		\State $\varepsilon = \varepsilon - 0.05$ \Comment{decrease $\varepsilon$ every iteration}
		\EndIf
		\EndFor
		\State \Return{$population$}\Comment{returning the best candidate set after evolution}
	\end{algorithmic}
\end{algorithm}

\subsection{Population Initialization Components}
\highlight{In FSMFL, any strategy can be used to create the first generation of candidate solutions. In practice, a random strategy usually performs well~\cite{winter1996genetic}. The random strategy produces an initial population in which every program entity has an equal probability of being assumed faulty.} That is, in an initial candidate every entity has an equal probability of being set to 0 or 1. \highlight{In Figure \ref{algorithm-framework}, different strategies correspond to different implementations used to store the genetic sequence data. We randomly generate candidates and add them to the initial population until the number of candidates reaches the predefined population size}. In the following population evolution phase the candidates are evaluated by a fitness function.
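The random strategy can be sketched as follows (an illustrative Python sketch with a hypothetical function name; the fixed seed only makes the sketch reproducible):

```python
import random

# Sketch: random population initialization; every entity has equal
# probability of being marked faulty (bit = 1) in each candidate.

def initialize_population(num_entities, population_size, rng=None):
    rng = rng or random.Random(0)
    return [[rng.randint(0, 1) for _ in range(num_entities)]
            for _ in range(population_size)]

population = initialize_population(num_entities=10, population_size=4)
for candidate in population:
    print("".join(map(str, candidate)))
```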
	
\subsection{Candidate Evaluation Components}
The fitness function plays an important role in candidate evaluation. Our fitness function is described as follows.
	
When there is only one fault in a program, all failed test cases should cover the faulty entity, so the faulty entity has the highest $n_{ef}$, which equals $n_f$. For the six fault localization formulas listed in Table 2, when two program statements have the same $n_{ep}$, the one with the higher $n_{ef}$ receives a higher suspicious score, so the faulty entity obtains a good ranking. However, it is not certain that faulty entities have the highest $n_{ef}$ when there are multiple faults, because some correct entities may be covered by more test cases. Nevertheless, under the constraint that any candidate must cover all failed test cases, candidates containing faulty entities tend to have higher suspicious scores than those containing only correct entities. Considering this situation, an aggregation evaluation strategy is exploited by our fitness function. We define some relevant notions before presenting the fitness function.
	
First, a weight value for every entity \textit{s} is defined. This definition is similar to Ochiai, because we also assume that an entity is more likely to be faulty when it is covered by more failed test cases.
	
\begin{equation}
	weight(s)=\frac{n_{ef}(s)}{\sqrt{n_f\times\left ( n_{ef}(s)+n_{ep}(s) \right )}}
\end{equation}
	
Based on these weight values, the aggregated failure ratio $f(C)$ and the aggregated passing ratio $p(C)$ of a candidate \textit{C} can be calculated by the following formulas. \highlight{The term $weight(s)$ indicates the importance of an entity, $n_{ef}(e_i)$ represents its coverage by failed test cases and $c_i$ represents whether the candidate includes this entity. We multiply them, and the result $f(C)$ indicates the candidate's weighted coverage of failed test cases. The same holds for $p(C)$ with respect to passed test cases.}
	
\begin{equation}
	f(C)=\sum_{1\leq i\leq m}c_i\times n_{ef}(e_i)\times weight\left ( e_i \right )
\end{equation}
	
\begin{equation}
	p(C)=\sum_{1\leq i\leq m}c_i\times n_{ep}(e_i)\times weight\left ( e_i \right )
\end{equation}
	
	
Finally, the suspicious score of a candidate $C$ is defined as $S(C)$, which is proportional to $f(C)$ and inversely proportional to $p(C)$. \highlight{Both the numerator and the denominator are incremented by 1 to avoid a division-by-zero exception}.
	
\begin{equation}
	S(C)=\frac{1+f(C)}{1+f(C)+p(C)}
\end{equation}
	
There exist many invalid candidates in the search space. To exclude them, we require that every failed test case covers at least one entity of a candidate, that is, the following $Coverage(C)$ equals 1 for a candidate $C$. Otherwise, the fitness value is zero. Note that our framework is capable of integrating other effective fitness functions.
	
\begin{equation}
	Coverage(C)=\frac{1}{n_f}\sum_{1\leq i\leq n,t_i\in T_F}min\left ( 1,\sum_{1\leq j\leq m}t_{ij}\times c_j \right )
\end{equation}
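The $weight(s)$, $f(C)$, $p(C)$, $S(C)$ and $Coverage(C)$ definitions above can be combined into a single fitness computation. The following is an illustrative Python sketch under the assumption that the per-entity counts are precomputed (the names are ours, not the prototype's):

```python
import math

# Sketch of the fitness function: weight(s), the aggregated ratios
# f(C) and p(C), the score S(C), all gated by the coverage constraint
# Coverage(C) = 1 over the failed test cases.

def weight(n_ef, n_ep, n_f):
    denom = math.sqrt(n_f * (n_ef + n_ep))
    return n_ef / denom if denom else 0.0

def fitness(candidate, n_ef, n_ep, n_f, failed_coverage):
    # failed_coverage: rows t_ij (0/1) for the failed test cases only
    covers_all = all(any(t[j] and candidate[j] for j in range(len(candidate)))
                     for t in failed_coverage)
    if not covers_all:                 # Coverage(C) < 1  ->  fitness 0
        return 0.0
    w = [weight(n_ef[j], n_ep[j], n_f) for j in range(len(candidate))]
    f = sum(c * ef * wj for c, ef, wj in zip(candidate, n_ef, w))
    p = sum(c * ep * wj for c, ep, wj in zip(candidate, n_ep, w))
    return (1 + f) / (1 + f + p)       # S(C)

# Toy data: 3 entities, 2 failed tests; entity 0 covers both failures.
print(fitness([1, 0, 0], [2, 1, 0], [0, 1, 2], 2, [[1, 0, 0], [1, 1, 0]]))
```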
	
\subsection{Next Population Generation Components}
	
This component focuses on how to generate the next population in our framework. There are five basic steps: Selection, Crossover, Mutation, Accepting and Replacement. Algorithm \ref{alg2} gives the pseudo-code.

In the Selection step, a rank selection operator is utilized to select parents for the next crossover step, in which both a single point crossover operator and a shuffle crossover operator are applied \cite{Mitchell1998}. 
	
\highlight{Generally speaking, the crossover operator recombines the selected parents, and in the subsequent mutation step every bit of a candidate has a chance to be mutated at a very low rate \cite{Mitchell1998}\cite{winter1996genetic}. The purpose of mutation, which introduces random modifications, is to maintain diversity within the population and inhibit premature convergence. Different mutation operators can be used to achieve this goal. The most commonly used operator, i.e., single-bit mutation, is used in our experiments. The mutation rate is empirically set to 0.05, which achieves competitive performance in our experiments.}
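The two operators can be sketched as follows (an illustrative Python sketch, not the prototype; the 0.05 mutation rate matches the setting above):

```python
import random

# Sketch of the genetic operators mentioned above: single-point
# crossover and per-bit mutation with a low rate (0.05 here).

def single_point_crossover(parent_a, parent_b, rng):
    point = rng.randrange(1, len(parent_a))          # cut strictly inside
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def mutate(candidate, rate, rng):
    return [bit ^ 1 if rng.random() < rate else bit for bit in candidate]

rng = random.Random(42)
a, b = [0, 0, 0, 0, 0], [1, 1, 1, 1, 1]
child_a, child_b = single_point_crossover(a, b, rng)
print(child_a, mutate(child_a, 0.05, rng))
```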

\begin{algorithm}[H]
	\caption{\highlight{nextGeneration($population, \gamma, q, \varepsilon$)}}
	\label{alg2}
	\begin{algorithmic}[1]
		\Require current population $population$, crossover rate $\gamma$, mutation rate $q$, simulated annealing possibility $\varepsilon$
		\Ensure next generation: $nextPopulation$
		\State $count \gets 0, size \gets length(population)$
		\State $nextPopulation = \left\lbrace p\ |\ p \in population\right\rbrace $
		\State $lowerBound = minimumFitness(Population) $
		\While {$count < (2 * size)$}	\Comment{the children are twice the size of the parent population}
		%\State // choose parents randomly
		\State $(Parent_A, Parent_B) \gets random\_select(population) $
		%\State // create children from parents
		\State $(Child_A, Child_B) \gets crossover(Parent_A, Parent_B, \gamma) $
		\State $Child_A \gets mutate(Child_A, q) $ \Comment{Mutate child with probability $q$}
		\State $Child_B \gets mutate(Child_B, q) $
		\State $check2accept(nextPopulation, lowerBound, Child_A, Child_B, \varepsilon)$\Comment{check whether to accept the children}
		\State{$count = count + 1$}
		\EndWhile
		\State \Return{$nextPopulation$}\Comment{return the next generation}
	\end{algorithmic}
\end{algorithm}

\highlight{In the Accepting step, candidates are evaluated using the fitness function to decide whether they are good enough to survive in the new population. In the last step, Replacement, we use the newly generated population for a further run of the algorithm. Meanwhile, a probabilistic technique named \q{Simulated Annealing} (SA) \cite{khachaturyan1979statistical,Khachaturyan:a19748} is adopted to search for a globally optimal solution rather than a premature one. SA makes the algorithm more robust when searching a large search space and prevents it from becoming stuck at a local optimum. We use $\varepsilon$ to denote the acceptance probability of SA. The higher the value of $\varepsilon$, the more likely a bad solution is accepted. By accepting bad solutions, SA maintains the diversity of the population and allows a more extensive search for the optimal solution. In our implementation, $\varepsilon$ is set to 0.9 at the beginning and decreases after every iteration. The pseudo-code is given in Algorithm \ref{alg3}.}
 
\begin{algorithm}[H]
	\caption{\highlight{check2accept($population, lowerBound, Child_A, Child_B, \varepsilon$)}}
	\label{alg3}
	\begin{algorithmic}[1]
		\Require $population$ is the current candidate set; $lowerBound$ is the threshold used for testing whether to accept the new candidates ($Child_A, Child_B$) with respect to the simulated annealing probability $\varepsilon$.
		\If {$fitness(Child_A) \geq lowerBound$}
		%\State // $Child_A$ is a good solution and add it to population
		\State $population \gets population \cup \left\lbrace Child_A\right\rbrace $
		\ElsIf {$random() < \varepsilon$}
		%\State // $Child_A$ is worse than any solution in population.
		
		\State $population \gets population \cup \left\lbrace Child_A\right\rbrace $
		\EndIf
		\If {$fitness(Child_B) \geq lowerBound$}
		%\State // $Child_B$ is a good solution and add it to population
		\State $population \gets population \cup \left\lbrace Child_B\right\rbrace $
		\ElsIf {$random() < \varepsilon$}
		\State $population \gets population \cup \left\lbrace Child_B\right\rbrace $
		\EndIf
	\end{algorithmic}
\end{algorithm}
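The acceptance rule of Algorithm 3 can be sketched as follows. This illustrative Python sketch handles one child at a time rather than a pair, and the toy fitness is our own assumption:

```python
import random

# Sketch of the simulated-annealing acceptance step (Algorithm 3): a
# child always survives if its fitness reaches the lower bound, and an
# inferior child still survives with probability epsilon.

def check2accept(population, fitness, lower_bound, child, epsilon, rng):
    if fitness(child) >= lower_bound or rng.random() < epsilon:
        population.append(child)

population = []
fitness = sum                      # toy fitness for the sketch: count of 1-bits
rng = random.Random(1)
check2accept(population, fitness, 2, [1, 1, 0], 0.0, rng)  # fit enough
check2accept(population, fitness, 2, [1, 0, 0], 0.0, rng)  # rejected (epsilon 0)
check2accept(population, fitness, 2, [1, 0, 0], 1.0, rng)  # accepted by SA
print(len(population))  # -> 2
```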
	
\subsection{Termination Criterion Setting}
	
\highlight{In the last process, a predefined termination criterion determines whether a solution (i.e., the current population) is good enough or still needs to be evolved further. In each iteration, the total fitness value of the current population is compared against that of the previous generation to find out whether a better solution has been found. The algorithm continues until no better offspring is generated in a specific number of consecutive iterations. Once the termination criterion is satisfied, the algorithm stops and the current population is treated as the best one.}
	
\highlight{The termination criterion we use is thus the number of consecutive iterations in which no better solution is generated. In our experiments, we empirically set this number to 50, which achieves satisfactory results in our empirical study.}
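This no-improvement criterion can be sketched as follows (an illustrative Python sketch; the paper uses a threshold of 50, smaller values here for brevity):

```python
# Sketch of the termination criterion: stop once the population's total
# fitness has not improved for `patience` consecutive generations.

def should_stop(history, patience):
    """history: total population fitness per generation, oldest first."""
    if len(history) <= patience:
        return False
    best_before = max(history[:-patience])
    return all(h <= best_before for h in history[-patience:])

print(should_stop([10, 12, 12, 12, 12], patience=3))  # no improvement for 3
print(should_stop([10, 12, 12, 13], patience=3))      # improved last round
```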
	
\subsection{Multiple Faults Localization for FSMFL}
	
After the genetic algorithm terminates, the last surviving population contains a set of candidates. \highlight{The following process is adopted to build the ranking list.} First, the candidates are sorted by their fitness values in descending order. The entities contained in a candidate are sorted by their $weight(s)$ values. Finally, the final ranking list is constructed by adding entities from the candidate list one by one according to their first appearance.
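The ranking construction described above can be sketched as follows, on hypothetical data with the fitness and weight values assumed precomputed:

```python
# Sketch of the ranking step: sort candidates by fitness (descending),
# order the entities inside each candidate by weight (descending), and
# append each entity to the final list at its first appearance.

def build_ranking(candidates, fitness, weight):
    ranking, seen = [], set()
    for cand in sorted(candidates, key=fitness, reverse=True):
        entities = [e for e, bit in enumerate(cand) if bit]
        for e in sorted(entities, key=lambda e: weight[e], reverse=True):
            if e not in seen:
                seen.add(e)
                ranking.append(e)
    return ranking

weight = {0: 0.2, 1: 0.9, 2: 0.5}
fitness = lambda c: sum(c)                  # toy fitness for the sketch
print(build_ranking([[1, 0, 1], [0, 1, 1]], fitness, weight))  # -> [2, 0, 1]
```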

\section{Empirical Studies}
We design three research questions for our experiments.

\textbf{RQ1: Is FSMFL better than existing approaches?} 

For this research question, we compare our approach with \highlight{seven single-fault localization approaches, including Tarantula \cite{Jones}, Ochiai \cite{Abreu}, OP2 \cite{Naish2011}, DStar \cite{Wong2014}, GP13 \cite{Yoo2012}, Ample \cite{Dallmeier2005} and Jaccard \cite{Chen}. The Linear multi-fault localization approach \cite{Dean} is chosen because it is also designed to evaluate sets of suspicious solutions, which is similar to our candidate solutions. Please refer to \cite{Dean} for the specific calculation process of their method.}

\highlight{\textbf{RQ2: Does FSMFL perform significantly better than existing approaches by using statistical hypothesis test methods?}} 

\highlight{FSMFL is designed for solving the multi-fault localization problem. The first step in comparing FSMFL with the baseline approaches is to verify that there indeed exists a significant difference between them. For this research question, three statistical hypothesis test methods, including ANOVA (Analysis of Variance) \cite{Friedman1939}, LSD (Least Significant Difference) \cite{Wilcoxon1992} and Bonferroni correction \cite{benjamini2001the}, are adopted to measure the difference between FSMFL and the other approaches.}

\textbf{RQ3: Is FSMFL's efficiency acceptable?} 

Efficiency is a significant issue when applying FSMFL in practice, especially for large-scale programs. We have optimized our approach in several ways to improve its efficiency. For this research question, we measure the time usage of our approach on all the subjects.

\subsection{Experiment Setup}

In this subsection, an experiment is conducted to evaluate the diagnostic capability of our approach on real programs, and to compare its effectiveness and efficiency against the baseline approaches. Our experiments are conducted on a Linux server with a 3.00GHz Intel(R) Xeon(R) E5-2623 v3 CPU and 32GB physical memory. The operating system is CentOS 7.0, and the compilers are GCC version 3.8.5 and JDK 1.7.0\_79. FSMFL and the other existing fault localization approaches are implemented in the Go programming language. In the experiment, we compare FSMFL against Tarantula, Ochiai, OP2, DStar, GP13, Ample, Jaccard and Linear \cite{Dean}.


\subsubsection{Subject Programs}

\highlight{Our benchmarks consist of the Siemens programs, the Linux programs, the Space program, and the JFreeChart, Joda-Time, Apache Commons Lang, Apache Commons Math and Google Closure programs. The first three are downloaded from SIR \cite{Do2005}; the rest are downloaded from Defects4j \cite{defects4j}. The Siemens programs have 174 to 539 lines of code (LOC), with 1052 to 5542 test cases in their test suites. The three Linux programs are gzip (6576 LOC), grep (12635 LOC) and sed (7125 LOC). The Space program (9126 LOC) has 13585 test cases. The JFreeChart program (52104 LOC) has 2193 test cases, the Joda-Time program (13630 LOC) has 4041 test cases, Apache Commons Lang (11844 LOC) has 2291 test cases, Apache Commons Math (42684 LOC) has 4378 test cases, and the Google Closure program (47446 LOC) has 7911 test cases.}

\subsubsection{Fault Injection and Construction of Real Multi-Fault Versions}

\highlight{
	Empirical studies in software testing are often difficult to make realistic because real bugs are infrequently used in software testing research~\cite{defects4j}. Extracting and reproducing real bugs is challenging, so manually seeded faults or mutants generated by mutation testing are commonly used as substitutes.
}

	To build the test subjects, versions with a single fault are downloaded from SIR \cite{Do2005}, while versions with multiple faults are constructed by manually injecting faults based on the original ones. 

\highlight{
	We also download 5 real large-scale programs from Defects4j \cite{defect4jtool}, with more than 30000 LOC on average. The programs are JFreeChart, Closure Compiler, Apache Commons-Lang, Apache Commons-Math and Joda-Time, maintained by Google, the Apache Foundation, etc. All have a number of real faults.
}

\begin{figure}[H]
	\caption{No.5 bug of Apache commons-math Program.}
	\centering
	\includegraphics[width=\linewidth]{def4j1.eps}
	\label{f-def4j1}
\end{figure}
\begin{figure}[H]
	\caption{No.40 bug of Apache commons-math Program.}
	\centering
	\includegraphics[width=\linewidth]{def4j3.eps}
	\label{f-def4j2}
\end{figure}

\highlight{Defects4j collects both the buggy and the fixed program revision for every fault. Figure \ref{f-def4j1} and Figure \ref{f-def4j2} show detailed information of bugs No.5 and No.40 in the Apache commons-math program. On the left, each figure shows the buggy version and its faulty lines; on the right, it gives the fixed version. As the figures show, a new combined multi-bug version can be obtained by manually merging the faulty lines of two buggy versions. The real multi-fault programs collected from Defects4j are used to verify the effectiveness of FSMFL.}
	
\highlight{We evaluate the effectiveness of the approaches with single faults on all programs, and with multiple faults on four Siemens programs (i.e., print\_tokens, print\_tokens2, replace and tot\_info), the three Linux programs, the Space program and the five real programs in Defects4j. The number of bugs in the multi-fault versions is two, three or five. The other three Siemens programs are excluded from the multi-fault experiments because they have too few executable lines to generate enough multi-fault versions. Table \ref{programInfo} shows the characteristics of the subject programs. The first column lists the program names. The second column shows which project each program belongs to. The third column gives the number of single-fault and multi-fault versions for each program. The fourth column counts the LOC of the program. The fifth column counts the LOC covered by the test cases. The last column indicates the number of test cases.}

\begin{table}[H]
	\scriptsize
	\centering
	\caption{Programs Used in the Experiment}
	
	\begin{tabular}{p{1.5cm}<{\centering}p{0.8cm}<{\centering}p{3.8cm}<{\centering}p{0.8cm}<{\centering}p{1.5cm}<{\centering}p{1cm}<{\centering}}
		\hline
		Program & Source & \#Single-Fault Version (\#Multi-fault Version) & All LOC & Executable LOC & \#Test Case \\
		\hline
		
		print\_tokens & Siemens &	20(33) &	539 &	203 &	4130 \\
		print\_tokens2 & Siemens &	20(36) &	489 &	201 &	4115 \\
		replace & Siemens	&   21(45) &	507 &	273 &	5542 \\
		schedule & Siemens &	13 &	397 &	166 &	2650 \\
		schedule2 & Siemens &	14 &	299 &	146 &	2710 \\
		tcas & Siemens &	18 &	174 &	73 &	1608 \\
		tot\_info & Siemens &	31(48) &	398 &	138 &	1052 \\
		gzip & Linux &	7(9) &	6576 &	1744 &	213 \\
		grep & Linux &	2(4) &	12635 &	3197 &	470 \\
		sed & Linux &	6(3) &	7125 &	2027 &	360 \\
		space & space &	37(39) &	9126 &	3814 &	13585 \\
		lang & Defect4J &	17(30) &	11844 &	1391 &	2291 \\
		chart & Defect4J &	9(5) &	52104 &	1193 &	2193 \\
		time & Defect4J &	6(4) &	13630 &	1131 &	4041 \\
		math & Defect4J &	13(6) &	42684 &	936 &	4378 \\
		closure & Defect4J &	3(1) &	47446 &	5631 &	7911 \\
		
		\hline
	\end{tabular}%
	\label{programInfo}
\end{table}

\subsection{\highlight{Optimization and Implementation}}
We consider the following aspects to improve the performance of FSMFL: \highlight{memory usage, parallel computation and algorithmic improvement.} First, every candidate is represented as a bit string. We choose a raw uint64 array as the basic data structure, so chromosomes of any length can be easily represented and the storage space is greatly reduced. Second, using the Go language and parallel computing techniques, we significantly improve computational efficiency and speed up the crossover and mutation steps. For the selection process, a combination of heap and quicksort is adopted to reduce the time complexity of the algorithm to $O(\log n)$.
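The uint64-array bit-string representation can be sketched as follows; the `Chromosome` type and its methods are our own illustration of the idea, not the actual FSMFL data structure:

```go
package main

import "fmt"

// Chromosome stores one candidate as a raw bit string packed into
// a uint64 slice: bit i set means entity i is in the candidate.
type Chromosome struct {
	bits []uint64
	n    int // number of program entities
}

// NewChromosome allocates (n+63)/64 words for n entities.
func NewChromosome(n int) *Chromosome {
	return &Chromosome{bits: make([]uint64, (n+63)/64), n: n}
}

func (c *Chromosome) Set(i int)      { c.bits[i/64] |= 1 << uint(i%64) }
func (c *Chromosome) Get(i int) bool { return c.bits[i/64]&(1<<uint(i%64)) != 0 }

// Flip toggles a single position, as in single-bit-string mutation.
func (c *Chromosome) Flip(i int) { c.bits[i/64] ^= 1 << uint(i%64) }

func main() {
	c := NewChromosome(200) // e.g. a program with ~200 executable lines
	c.Set(7)
	c.Flip(7) // mutation reverts bit 7
	c.Set(130)
	fmt.Println(c.Get(7), c.Get(130)) // false true
}
```

Packing 64 entities per word means a 5000-line program needs only about 80 words per chromosome.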

With the optimized genetic algorithm implementation, the framework can support a large-scale search space. \highlight{In our implementation, the population size is set to 500 and each generation produces 1000 children. A rank selection operator is used in the selection step. Both a single-point crossover operator and a shuffle crossover operator are used in the crossover step. A single-bit-string mutation operator is adopted, meaning each position has the same probability of being chosen to mutate. The crossover rate is set to 0.99 and the mutation rate to 0.05. All the algorithm's parameters, such as population size, crossover rate and mutation rate, are chosen empirically according to the standard genetic algorithm \cite{winter1996genetic}. These parameter values achieve competitive results in our empirical study.}

\subsection{Performance Metrics for Evaluation}

In the single-fault localization problem, the $EXAM$ \cite{Wong} metric is usually used to evaluate the effectiveness of the result. The metric measures the percentage of entities one needs to examine before finding the faulty entity. For a specific program, the localization result is better when the $EXAM$ value is smaller. However, the $EXAM$ metric cannot be applied to the multi-fault localization problem, which considers all fault locations at the same time. Abreu et al. \cite{Abreua} proposed a wasted-effort metric to evaluate their multi-fault localization results, but they only consider the statements in their result set. We define two new metrics for the multi-fault localization problem by extending $EXAM$ to $EXAM_F$ and $EXAM_L$.

\paragraph{Definition 4 ($EXAM_F$)} The percentage of entities that have to be examined until the first faulty entity is found. For example, Tarantula's $EXAM_F$ is 4/17 in Figure \ref{example_figure}.

\paragraph{Definition 5 ($EXAM_L$)} The percentage of entities that have to be examined until the last faulty entity is found. For example, Tarantula's $EXAM_L$ is 10/17 in Figure \ref{example_figure}.

$EXAM_F$ is more useful in the scenario of finding one fault at a time. On the other hand, $EXAM_L$ is more suitable when all faults must be found at the same time, such as when localizing faults in compiler programs. A smaller $EXAM_L$ means that all fault locations rank high in the suspiciousness ranking list.

In this section, both $EXAM_F$ and $EXAM_L$ are used to compare the performance of our approach with others on multi-fault versions.

\subsection{Hypothesis Testing Methods}
\highlight{A group of experiments comparing our approach with the existing baseline approaches is conducted on programs with both single and multiple faults. Meanwhile, the following statistical methods are adopted to further analyze the experimental results.}

\highlight{These hypothesis testing methods are ANOVA, LSD and Bonferroni Correction.}
\highlight{\begin{itemize}
	\item[1.] Analysis of variance (ANOVA) \cite{Friedman1939} analyzes the difference among group means and their variances. ANOVA provides a statistical test of whether the means of several groups are equal, and is useful for comparing three or more means for statistical significance.
	\item[2.] Least Significant Difference (LSD) \cite{Wilcoxon1992} measures how much of a difference between means must be observed before one can conclude that the means are significantly different. This technique can be used only when the ANOVA result is significant.
	\item[3.] Bonferroni Correction \cite{benjamini2001the} is used to counteract the problem of multiple comparisons.
\end{itemize}}

\highlight{The experimental process includes the following steps:}
\highlight{\begin{itemize}
	\item[1.] Calculating the average $EXAM$ value of FSMFL and the baseline approaches by executing each program 30 times.	
	\item[2.] Using the ANOVA test to analyze the differences between the means of all the approaches and to compute the $EXAM$ values and variance.
	\item[3.] Using LSD, only when the ANOVA result is significant, to measure the degree of difference between two specific approaches and decide which one is better.
	\item[4.] Using Bonferroni Correction \cite{benjamini2001the} to counteract the problem of multiple comparisons. Like LSD, it can only be used when the ANOVA result is significant; its usage is similar to LSD, but Bonferroni cares more about the false discovery rate.	
	\item[5.] Visualizing the $EXAM_F$ and $EXAM_L$ experimental results.
\end{itemize}}

\subsection{Experiments for Single-Fault Versions}

\highlight{Though FSMFL aims at solving the multi-fault localization problem, it can also be applied to programs with only a single fault. In this subsection, we compare the effectiveness of FSMFL with that of the baseline approaches. As shown in Table \ref{programInfo}, we choose 189 single-fault versions of C programs from the Siemens, Linux and space suites, and 48 single-fault versions of Java programs from Defect4j. We execute each program 30 times and calculate the average $EXAM$. Based on these $EXAM$ values, the following hypothesis tests are conducted to compare the effectiveness of FSMFL and the other approaches.}

\subsubsection{Result Analysis with Single-Fault Versions}
\highlight{The results on single-fault versions are shown in Figure \ref{singleResultFigure}, which gives the percentage of faulty versions whose scores fall within each segment. Each segment is 1 percentage point wide, for example 0\%-1\% or 43\%-44\%.}

\highlight{It can be observed from Figure \ref{singleResultFigure} that OP2, Ochiai, DStar and Ample perform better. FSMFL and Linear are close to Tarantula, localizing nearly the same percentage of faults within the top 20\%, 40\% and 60\% of entities. Jaccard and GP13 perform poorly on single-fault problems.}

\highlight{The statistical hypothesis test results indicate that FSMFL does not perform worse than the other single-fault localization approaches.}
\begin{figure}[H]
	\centering
	\includegraphics[width=\linewidth]{Single-fault.eps}
	\caption{Visualization of Single-fault Localization Result}
	\label{singleResultFigure}
\end{figure}

\subsubsection{Hypothesis Test with Single-Fault Versions}
\highlight{ANOVA analysis is conducted among the 9 approaches on the $EXAM$ values. Detailed results are shown in Table \ref{singleANOVA}. The \textit{p-value} is $1.49622\times10^{-99}$, far smaller than 0.05. According to the hypothesis test, we reject the null hypothesis at the 95\% confidence level. Such a small \textit{p-value} means the difference among the 9 approaches is significant. Further statistical hypothesis tests, such as LSD and Bonferroni, can be conducted for confirmation.}

\begin{table}[H]
	\centering
	\caption{ANOVA Test Result on Single-fault Version}
	\begin{tabular}{cccccc}
		\hline
		Source & SS & DF & MS & Chi-sq & P-Value \\
		\hline
		Columns & 2702.28 & 8 & 337.785 & 451.83 & 1.49622e-99 \\
		Error & 8445.72 & 1856 & 4.55 & & \\
		Total & 11148 & 2096 & & & \\
		\hline
	\end{tabular}%
	\label{singleANOVA}
\end{table}

\highlight{The LSD comparison experiments at the 0.05 significance level are conducted and the results are shown in Figure \ref{singleLSDBon}. The lower the LSD value, the more significantly the first approach outperforms the second. In Figure \ref{singleLSDBon}, the 9 approaches (Tarantula, Ochiai, OP2, DStar, GP13, Ample, Jaccard, Linear, FSMFL) are listed on the y-axis; a smaller value on the x-axis means the corresponding approach is better. Figure \ref{singleLSDBon} also shows the Bonferroni test result, which uses similar criteria to LSD.}

\begin{figure}[H]
	\centering
	\includegraphics[width=\linewidth]{Single-fault-LSD-Bonferroni.eps}
	\caption{LSD and Bonferroni Result on Single-Fault Versions}
	\label{singleLSDBon}
\end{figure}

\highlight{From Figure \ref{singleLSDBon}, we reach the same conclusion: Ochiai, OP2, DStar, GP13 and Ample belong to the first echelon, FSMFL, Linear and Tarantula belong to the second echelon, and Jaccard gives the worst result. As the LSD result shows, although FSMFL does not achieve the best result among the 9 approaches, it is still an acceptable and feasible approach even for single-fault localization.}

\begin{table}[htbp]
	\scriptsize
	\centering
	\caption{Multiple Comparison For Single-fault Versions}
	\label{single-compare-table}
	\begin{tabular}{p{1cm}<{\centering}p{1cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{1cm}<{\centering}}
		\hline
		& Tarantula & Ochiai & OP2 & Dstar & GP13 & Ample & Jaccard & Linear & FSMFL \\
		\hline
		
		
		Tarantula &           & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &           &           &           \\
		Ochiai    &           &           & \textsurd &           &           & \textsurd &           &           &           \\
		OP2       &           &           &           &           &           &           &           &           &           \\
		Dstar     &           &           & \textsurd &           &           & \textsurd &           &           &           \\
		GP13      &           &           & \textsurd &           &           & \textsurd &           &           &           \\
		Ample     &           &           &           &           &           &           &           &           &           \\
		Jaccard   & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &           & \textsurd & \textsurd \\
		Linear    &           & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &           &           &           \\
		FSMFL     &           & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &           &           &           \\
		
		\hline
	\end{tabular}
\end{table}

\highlight{Table \ref{single-compare-table} further shows the effectiveness of single-fault localization among these approaches. In the table, a check mark means the approach in the column header is better than the approach on the left. If an approach has more check marks in its column, it performs better under this metric; similarly, if an approach has more check marks in its row, it performs worse. We reach the same conclusion that FSMFL, Linear and Tarantula belong to the second echelon. Thus FSMFL can also be used to solve the single-fault localization problem.}

\subsection{Experiments for Multiple-Fault Versions}
\highlight{The main goal of FSMFL is multi-fault localization. A set of experiments is conducted in this section. Specifically, $EXAM_F$ and $EXAM_L$ are used to evaluate and compare the effectiveness of FSMFL and the baseline approaches. A thorough analysis of the experimental results supports the conclusion that FSMFL performs well in multi-fault localization, especially under the $EXAM_L$ metric.}

\highlight{As shown in Table \ref{programInfo}, there are 217 multi-fault versions of C programs (from Siemens, Linux and space) and 46 multi-fault versions of Java programs (from Defect4j). We execute each program 30 times and calculate the average $EXAM_F$ and $EXAM_L$ to compare the approaches. Based on these values, hypothesis tests are conducted to verify the effectiveness of FSMFL.}

\subsubsection{Result Analysis with Multiple-Fault Versions}

\highlight{As shown in Table \ref{programInfo}, we choose in total 189 single-fault versions and 217 multi-fault versions of C programs, and 48 single-fault versions and 46 multi-fault versions of Java programs, to evaluate the approaches under both the $EXAM_F$ and $EXAM_L$ metrics. We execute each program 30 times with each of the 9 approaches and calculate the average $EXAM_F$ and $EXAM_L$ values.}

\highlight{Figure \ref{MultiResultFigure} depicts the results of the 9 fault localization approaches on multi-fault versions. It can be observed that OP2 does not perform as well as in the single-fault localization experiments.}

\begin{figure}[htbp]
	\centering
	\includegraphics[width=1.0\textwidth]{Multi-fault-EXAMF-EXAML.eps}
	\caption{Visualization of Multi-fault Localization Result}
	\label{MultiResultFigure}
\end{figure}

\highlight{The sub-figures on the left of Figure \ref{MultiResultFigure} show that FSMFL works as well as the other approaches (Tarantula, Ochiai, GP13, Ample, Linear) in finding the first fault in multi-fault problems under the $EXAM_F$ metric. This indicates that FSMFL is a feasible approach for finding the first fault in multi-fault problems. As for the $EXAM_L$ experiments, it can be observed from the right of Figure \ref{MultiResultFigure} that FSMFL and the Linear algorithm are better than the other approaches.}

\highlight{For a further verification, statistical hypothesis test methods  (ANOVA, LSD and Bonferroni) are adopted to check if the difference in performance is significant in the following section.}

\subsubsection{Hypothesis Test with Multiple-Fault Versions}
\highlight{ANOVA is conducted as a hypothesis test over the 9 approaches. The \textit{p-values} of the two $EXAM$ metrics ($EXAM_F$, $EXAM_L$) are listed in Tables \ref{EXAMFANOVA} and \ref{EXAMLANOVA}. Both \textit{p-values} ($4.28756\times10^{-99}$ and $6.09209\times10^{-89}$) are much smaller than 0.05, which indicates the 9 approaches differ significantly under both $EXAM_F$ and $EXAM_L$.}

\begin{table}[H]
	\scriptsize
	\caption{ANOVA Test Result on Multi-Fault Versions using $EXAM_F$}
	\centering
	\begin{tabular}{cccccc}
		\hline
		Source & SS & DF & MS & Chi-sq & P-Value \\
		\hline
		Columns & 3123.9 & 8 & 390.482 & 482.36 & 4.28756e-99 \\
		Error & 10554.1 & 2104 & 5.016 & & \\
		Total & 13678 & 2375 & & & \\
		\hline
	\end{tabular}%
	\label{EXAMFANOVA}
\end{table}

\begin{table}[H]
	\scriptsize
	\centering
	\caption{ANOVA Test Result on Multi-Fault Versions using $EXAM_L$}
	\begin{tabular}{cccccc}
		\hline
		Source & SS & DF & MS & Chi-sq & P-Value \\
		\hline
		Columns & 2681.98 & 8 & 335.248 & 434.98 & 6.09209e-89 \\
		Error & 10340.02 & 2104 & 4.914 & & \\
		Total & 13022 & 2375 & & & \\
		\hline
	\end{tabular}%
	\label{EXAMLANOVA}
\end{table}

\highlight{The LSD and Bonferroni test results are shown in Figure \ref{EXAMFLSDBON} using $EXAM_F$ and Figure \ref{EXAMLLSDBON} using $EXAM_L$. Both seeded programs (Siemens and Linux) and large-scale real programs (Defects4j) are used to evaluate FSMFL and the baseline approaches separately. We analyze the results in detail in the following sections.}

\textbf{Hypothesis test using $EXAM_F$ metric.}

\highlight{In Figure \ref{EXAMFLSDBON}, the two sub-figures on the top give the experimental results on seeded programs, while the two at the bottom show the results on large-scale real programs. The two sub-figures on the left show the LSD test and the two on the right the Bonferroni test. As previously explained, Tarantula, Ochiai, OP2, DStar, GP13, Ample, Jaccard, Linear and FSMFL are listed on the y-axis; a smaller value on the x-axis means a better approach. In Figure \ref{EXAMFLSDBON}, Jaccard, OP2 and DStar have high mean values and are significantly different from the other 6 approaches.}
\begin{figure}[H]
	\centering
	\includegraphics[width=\linewidth]{Multi-faults-LSD-Bonferroni-EXAMF}
	\caption{LSD and Bonferroni Result on Multi-fault Versions using $EXAM_F$}
	\label{EXAMFLSDBON}
\end{figure}
\highlight{The LSD and Bonferroni results indicate that Jaccard, OP2 and DStar perform worse in finding the first fault ($EXAM_F$) than the others, on both seeded and real large-scale programs. FSMFL is similar to the other 5 approaches. Although FSMFL is not the best approach under $EXAM_F$, it can still find the first fault as effectively as the other 5 approaches in practice.}

\begin{table}[htbp]
	\scriptsize
	\centering
	\caption{Multiple Comparison For Multi-Fault Versions using $EXAM_F$}
	\label{multi-EXAMF-compare-table}
	\begin{tabular}{p{1cm}<{\centering}p{1cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{1cm}<{\centering}}
		\hline
		& Tarantula & Ochiai & OP2 & Dstar & GP13 & Ample & Jaccard & Linear & FSMFL \\
		\hline
		
		Tarantula &           &           &           &           &           &           &  &           &           \\
		Ochiai    &           &           &           &           &           &           &  &           &           \\
		OP2       & \textsurd & \textsurd &           & \textsurd & \textsurd &           &  & \textsurd & \textsurd \\
		Dstar     & \textsurd & \textsurd &           &           &           &           &  & \textsurd & \textsurd \\
		GP13      & \textsurd & \textsurd &           &           &           &           &  & \textsurd & \textsurd \\
		Ample     & \textsurd & \textsurd &           & \textsurd & \textsurd &           &  & \textsurd & \textsurd \\
		Jaccard   & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &  & \textsurd & \textsurd \\
		Linear    &           &           &           &           &           &           &  &           &           \\
		FSMFL     &           &           &           &           &           &           &  &           &           \\
		
		\hline
	\end{tabular}
\end{table}

\highlight{Table \ref{multi-EXAMF-compare-table} shows the effectiveness measured by $EXAM_F$ among these approaches. It can be observed that FSMFL, Tarantula, Linear and Ochiai perform best.}

\textbf{Hypothesis test using $EXAM_L$ Metric}

\highlight{The two sub-figures on the top of Figure \ref{EXAMLLSDBON} give the experimental results on seeded programs using $EXAM_L$, and the two at the bottom those on large-scale real programs. The two sub-figures on the left show the LSD test and the two on the right the Bonferroni test.}

\highlight{Figure \ref{EXAMLLSDBON} indicates that both FSMFL and Linear achieve good results and perform significantly better than the other 7 approaches in terms of the $EXAM_L$ metric.
Jaccard, OP2 and DStar have high mean values, as shown in Figure \ref{EXAMLLSDBON}, and are significantly different from the other 6 approaches. The experimental results indicate that FSMFL usually outperforms the other approaches in terms of $EXAM_L$.}

\begin{figure}[H]
	\centering
	\includegraphics[width=\linewidth]{Multi-faults-LSD-Bonferroni-EXAML.eps}
	\caption{LSD and Bonferroni Result on Multi-fault Versions using $EXAM_L$}
	\label{EXAMLLSDBON}
\end{figure}

\begin{table}[htbp]
	\scriptsize	
	\centering
	\caption{Multiple Comparison For Multi-Fault Versions using $EXAM_L$}
	\label{multi-EXAML-compare-table}
	\begin{tabular}{p{1cm}<{\centering}p{1cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{0.7cm}<{\centering}p{1cm}<{\centering}}
		\hline
		& Tarantula & Ochiai & OP2 & Dstar & GP13 & Ample & Jaccard & Linear & FSMFL \\
		\hline
		
		
		Tarantula &                   & \textsurd &           &           &           &           &  & \textsurd & \textsurd \\
		Ochiai    &           &           &           &           &           &           &  & \textsurd & \textsurd \\
		OP2       & \textsurd & \textsurd &           & \textsurd & \textsurd &           &  & \textsurd & \textsurd \\
		Dstar     & \textsurd & \textsurd &           &           &           &           &  & \textsurd & \textsurd \\
		GP13      & \textsurd & \textsurd &           &           &           &           &  & \textsurd & \textsurd \\
		Ample     &  \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &           &  & \textsurd & \textsurd \\
		Jaccard   & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd & \textsurd &  & \textsurd & \textsurd \\
		Linear    &           &           &           &           &           &           &  &           & \textsurd \\
		FSMFL     &           &           &           &           &           &           &  &           &           \\
		
		\hline
	\end{tabular}
\end{table}

\highlight{Table \ref{multi-EXAML-compare-table} gives a clear picture of the comparison. The notations in the table have been described in Section 4.5. We can see that FSMFL outperforms the others in terms of $EXAM_L$.}

\highlight{The hypothesis tests indicate that FSMFL performs similarly to the other approaches (Tarantula, Ochiai, GP13, Ample, Linear) in finding the first fault in multi-fault localization in terms of $EXAM_F$. Meanwhile, FSMFL and Linear outperform the rest in terms of $EXAM_L$.}

\highlight{In summary, our experiments show that FSMFL is more effective in finding all the faulty statements at the same time.}

\subsection{Efficiency}
\highlight{The time required for a fault localization approach has two parts. Part I involves data collection. Part II uses the data collected in Part I to locate the faults. All spectrum-based fault localization approaches require the same set of data, so they have the same cost in Part I. Part II contains two processes: in the first, time is spent loading coverage information and collecting execution results; in the second, time is spent on the computation and evolution of the genetic algorithm. Table \ref{t-computational-time} gives the computational time of Part II of FSMFL on the Siemens, Linux, space and Defect4j suites. The computational time is the average of 100 executions for every faulty version of the program. It can be observed that the computational time is less than 23 seconds even for large-scale programs, which is acceptable in a real development environment.}

\highlight{It can also be observed that the computational time increases with the executable LOC (i.e., $LOC$) and the number of test cases (i.e., $TestCaseNumber$). We use the least-squares method \cite{stigler1981gauss} to perform a correlation test on all the executions and obtain the following fitted function:}
 
\begin{equation}
	Time= 6.12 \times LOC  + 0.04 \times TestCaseNumber
\end{equation}


\begin{table}[H]
	\scriptsize
	\centering
	\caption{The computational time by Part II of FSMFL}
	\begin{tabular}{p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{1.5cm}<{\centering}p{3cm}<{\centering}}
		\hline
		Program & Executable LOC & \#Test Cases & FSMFL using Go \\
		\hline
		tot\_info &	138 &	1052 &	1.15s \\
		print\_tokens2 &	201 &	4115 &	1.76s \\
		print\_tokens &	203 &	4130 &	2.48s \\
		gzip &	1744 &	213 &	2.02s \\
		sed &	2027 &	360 &	1.97s \\
		replace &	273 &	5542 &	3.47s \\
		grep &	3197 &	470 &	2.03s \\
		space &	3814 &	13585 &	22.97s \\
		lang &	1391 &	2291 &	0.9s \\
		chart &	1193 &	2193 &	0.6s \\
		time &	1131 &	4041 &	1.1s \\
		math &	936 &	4378 &	0.9s \\
		closure &	5631 &	7911 &	12.9s \\
		\hline
	\end{tabular}
	\label{t-computational-time}
\end{table}

\subsection{Summary}
For RQ1, our approach is compared against the Tarantula, Ochiai, OP2, DStar, GP13, Ample, Jaccard and Linear approaches. The results show that FSMFL performs better on subjects with multiple faults and similarly on subjects with a single fault. \highlight{For RQ2, statistical hypothesis test methods, namely ANOVA, LSD and Bonferroni Correction, are used to measure the difference between FSMFL and the baseline approaches. The results indicate that FSMFL is indeed significantly different from the others.} For RQ3, we evaluate the time usage of FSMFL and confirm that the cost is acceptable. 

\subsection{Threats to Validity}
In this subsection, we discuss the potential threats to validity in our empirical study.

\highlight{Threats to external validity concern whether the observed experimental results can be generalized to other subjects. To guarantee representativeness, we choose a large number of programs from the widely used Siemens, Linux and Defect4J \cite{defects4j}\cite{defect4jtool} suites. These subjects include large-scale programs with over 50000 LOC, and industrial programs such as “Apache Commons-Lang”, “Apache Commons-Math”, “JFreeChart” and “Closure Compiler”. Our experiments consider both C and Java programs and include both artificial and real bugs. We realize that no empirical study is perfect, and there must be some tricky programs and bugs not covered by our experiments. We plan to enlarge our set of subjects in future work.}

\highlight{Threats to internal validity mainly concern the uncontrolled internal factors that might influence the experimental results. The main internal threat is potential errors in our implementation. To reduce this threat, pair programming is used and the experimental results are carefully examined. Secondly, parameters are chosen empirically according to the standard genetic algorithm \cite{Mitchell1998}. After testing different sets of parameters, we conclude that different parameter values have very little effect on the experimental results. 
}

\highlight{Threats to construct validity concern whether the performance metrics used in the empirical studies reflect the real-world situation. In real cases, programs commonly contain more than one fault. We believe $EXAM_F$ is a reasonable metric to measure the effectiveness of locating the first fault; similarly, $EXAM_L$ is a reasonable metric to measure the effectiveness of locating all the faults. To verify that FSMFL is indeed statistically different from the baseline approaches, ANOVA, LSD \cite{Wilcoxon1992} and Bonferroni correction \cite{benjamini2001the} are adopted to conduct thorough experiments. We realize that the design of effective metrics for evaluating multi-fault localization is still in its infancy; designing more reasonable metrics is part of our future work.
}
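As a concrete illustration of how such EXAM-style metrics behave, the following minimal sketch computes them under the interpretation above: $EXAM_F$ ($EXAM_L$) as the fraction of statements a developer must examine, in descending suspiciousness order, before reaching the first (last) fault. The function name and the toy ranking are hypothetical, introduced only for illustration.

```python
# Hedged sketch of EXAM-style metrics, assuming:
#   EXAM_F = fraction of statements examined until the FIRST fault is reached,
#   EXAM_L = fraction of statements examined until the LAST fault is reached,
# when statements are inspected in descending suspiciousness order.
def exam_metrics(ranking, faulty):
    """ranking: statement ids, most suspicious first; faulty: set of faulty ids."""
    # 1-based inspection positions at which faulty statements appear
    positions = [i + 1 for i, s in enumerate(ranking) if s in faulty]
    n = len(ranking)
    return positions[0] / n, positions[-1] / n  # (EXAM_F, EXAM_L)

# Toy example: 10 ranked statements, with faults at ranked positions 2 and 7
rank = list(range(10))
exam_f, exam_l = exam_metrics(rank, {1, 6})
# exam_f = 0.2, exam_l = 0.7
```

Lower values are better for both metrics; a multi-fault localizer can have a good $EXAM_F$ but a poor $EXAM_L$ if it ranks one fault highly while burying the others.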
	
\section{Related Work}	
There is a large body of work on automatically localizing faulty statements in software code. With the increase of software complexity, software faults become more difficult to identify, which directly increases debugging cost \cite{Wong2016}. Dynamic fault localization \cite{Jones,Xie2013} localizes faulty statements by leveraging execution information collected from test case executions when abnormal behavior is detected during testing. Among these techniques, spectrum-based fault localization has shown its effectiveness and efficiency in locating faults automatically \cite{Wong2016}. Spectrum-based fault localization ranks program statements by their suspiciousness scores, which are calculated from the spectrum information. Numerous spectrum-based fault localization approaches have been proposed to localize multiple faults, and they are widely used in both research and industry.
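To make the ranking step above concrete, the following minimal sketch computes suspiciousness scores from a toy program spectrum using the Ochiai formula as one representative spectrum-based metric. The choice of formula and all names in the sketch are illustrative only, not part of FSMFL itself.

```python
# Hedged sketch: ranking statements by a spectrum-based suspiciousness score.
# Ochiai is used here purely as a representative example formula.
import math

def ochiai(ef, ep, nf):
    """ef/ep: failing/passing tests covering the statement;
    nf: failing tests NOT covering it."""
    denom = math.sqrt((ef + nf) * (ef + ep))
    return ef / denom if denom > 0 else 0.0

# Toy spectrum: (per-statement coverage vector, did the test fail?)
coverage = [([True, True, False], True),   # failing test covers s0, s1
            ([True, False, True], False),  # passing test covers s0, s2
            ([False, True, True], False)]  # passing test covers s1, s2

total_fail = sum(1 for _, failed in coverage if failed)
scores = []
for s in range(3):
    ef = sum(1 for cov, failed in coverage if failed and cov[s])
    ep = sum(1 for cov, failed in coverage if not failed and cov[s])
    scores.append(ochiai(ef, ep, total_fail - ef))

# Inspect statements in descending suspiciousness order
ranking = sorted(range(3), key=lambda s: -scores[s])
```

Here s2, covered only by passing tests, scores 0 and is ranked last; a developer would inspect s0 and s1 first.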


\highlight{In this context, Jones et al. \cite{Jones2007} introduced an approach that clusters failed test cases into groups so that different developers can localize each fault in parallel using a single-fault localization approach. It is semi-automatic, and its effectiveness depends on the accuracy of the clustering.} In addition, Abreu and his colleagues \cite{Abreua} proposed BARINEL, which combines spectrum-based fault localization and model-based diagnosis. BARINEL uses fault candidates and Bayesian reasoning to deduce the probabilities of program entities. Zhang \cite{ZHANG201735} reported a comprehensive study investigating the impact of cloning failed test cases on the effectiveness of SFL techniques. Perez \cite{Perez:2017:TDM:3097368.3097446} proposed a new metric, called DDU, to improve the accuracy of spectrum-based fault localization approaches. Moreover, Dean et al. \cite{Dean} presented a multi-fault localization approach based on a linear programming model. However, the suspiciousness function they use must be converted to a linear model, so their approach cannot be applied when such a conversion is infeasible. \highlight{Compared with their approach, ours uses a better suspiciousness function, and both linear and non-linear models can be plugged into FSMFL. Steimann and Frenkel \cite{Steimann,Steimanna} proposed a coverage-based locator for multiple faults by assuming a probability distribution of the number of faults, and they apply several techniques to reduce the algorithm's complexity. However, the algorithm is exponential, and their experiments confirm that it is not scalable. Gong et al. \cite{Gong} developed an indicator that can adapt an existing single-fault localization approach to the multi-fault localization problem, but it needs to interact with a programmer to identify the faults. The approach we propose is a new multi-fault localization approach based on a genetic algorithm: we design a weighted fitness function and re-implement the genetic algorithm with different strategies, and the framework can be easily extended.}

Many popular fault localization approaches are optimized for programs in which only one fault exists \cite{Hamill2009,Lucia}. To address this limitation, some researchers have started to investigate the relations between multiple faults and the influence of coexisting faults on the effectiveness of fault localization approaches \cite{Debroy,DiGiuseppe2011,Xue}. Their results imply that different faults may interfere with each other and significantly decrease the accuracy of existing single-fault oriented localization approaches.

\highlight{In recent years, researchers have paid more attention to the actual effectiveness of spectrum-based fault localization on real programs \cite{Song2014,Pearson2016,Pearson2017,Xia2016ICSME}. Some research \cite{Lucia,Yoo2014,Kim} shows that fault localization can facilitate debugging activities to some extent, but it can still be improved in many aspects \cite{Steimannb}. Tang et al. \cite{Tang,Tanga} proposed more practical fault localization approaches based on both coverage and version information. Sun and Podgurski \cite{Sun2016ICST} investigated several coverage-based statistical fault localization metrics and identified the common properties of the most effective ones. Moreover, Tang and Chan \cite{7918543} proposed an empirical framework of accuracy graphs to reveal the relative accuracy of formulas. This framework makes it possible to reveal accuracy relationships among formulas that have not been discovered by theoretical analysis.} 

Meanwhile, using meta-heuristic algorithms to solve software engineering problems has become popular. Such algorithms have been successfully applied throughout the software engineering lifecycle, especially in testing and debugging \cite{Harman2012}. Yoo \cite{Yoo2012} first used genetic programming to design risk evaluation formulas for spectrum-based fault localization. Utilizing heuristic algorithms gives us a broader perspective for solving the fault localization problem.

\highlight{There is also a large body of work on automated program repair (APR), which aims at generating patches automatically to fix faults in software. Fault localization results are the basis of automated program repair \cite{Gong,Debroy2014}: most APR tools apply fault localization (FL) techniques to identify the locations of the likely faults to be repaired. Assiri \cite{Assiri2017} conducted a controlled experiment to evaluate the impact of ten FL techniques on APR effectiveness, performance, and repair correctness. According to their experimental results, the effectiveness, performance, and repair correctness of APR depend on the FL method used; if FL does not identify the location of a fault, applying an APR tool will not be effective and will fail to repair the fault. Moreover, automated program repair results can in turn be used to evaluate the effectiveness of fault localization techniques \cite{Qi2013}. Since automated program repair attempts to fix all faults together, it is necessary to assign all potential faults high suspiciousness scores.}

\section{Conclusion}
\highlight{This paper presents FSMFL, a genetic algorithm based framework for multi-fault localization. A fitness function is designed to evaluate combinations of program entities, and statistical hypothesis tests are conducted to compare FSMFL against other spectrum-based fault localization approaches. We have also developed optimization techniques to improve the efficiency of FSMFL. Our extensive experiments show that FSMFL is competitive in single-fault localization, superior in multi-fault localization, and efficient.}
	
\section*{Acknowledgement}
This work was partially supported by the National Natural Science Foundation of China (Nos. 61202030, 61202006, and 71502125).
	
\section*{References}	
\bibliography{mybibfile}
\end{document}