% THIS IS SIGPROC-SP.TEX - VERSION 3.1
% WORKS WITH V3.2SP OF ACM_PROC_ARTICLE-SP.CLS
% APRIL 2009
%
% It is an example file showing how to use the 'acm_proc_article-sp.cls' V3.2SP
% LaTeX2e document class file for Conference Proceedings submissions.
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V3.2SP) *DOES NOT* produce:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) Page numbering
% ---------------------------------------------------------------------------------------------------------------
% It is an example which *does* use the .bib file (from which the .bbl file
% is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission,
% you need to 'insert'  your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% Questions regarding SIGS should be sent to
% Adrienne Griscti ---> griscti@acm.org
%
% Questions/suggestions regarding the guidelines, .tex and .cls files, etc. to
% Gerald Murray ---> murray@hq.acm.org
%
% For tracking purposes - this is V3.1SP - APRIL 2009

\documentclass{acm_proc_article-sp}

%\hyphenation{analy-sis  gen-era-te au-to-mat-ic ver-i-fi-ca-tion in-fra-struc-ture gen-er-a-tion nec-es-sary con-cep-tu-al-ly meth-od prob-lem cov-er-age man-u-al-ly de-vel-op-er}

%To set the listing package 
\lstset{ %
  language=Java,                % the language of the code
  basicstyle=\scriptsize,           % the size of the fonts that are used for the code
  numbers=left,                   % where to put the line-numbers
  numberstyle=\tiny\color{gray},  % the style that is used for the line-numbers
  stepnumber=1,                   % the step between two line-numbers. If it's 1, each line 
                                  % will be numbered
  numbersep=5pt,                  % how far the line-numbers are from the code
  backgroundcolor=\color{white},      % choose the background color. You must add \usepackage{color}
  showspaces=false,               % show spaces adding particular underscores
  showstringspaces=false,         % underline spaces within strings
  showtabs=false,                 % show tabs within strings adding particular underscores
  frame=shadowbox,                   % adds a frame around the code
  rulecolor=\color{black},        % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. comments (green here))
  tabsize=2,                      % sets default tabsize to 2 spaces
  captionpos=b,                   % sets the caption-position to bottom
  breaklines=true,                % sets automatic line breaking
  breakatwhitespace=false,        % sets if automatic breaks should only happen at whitespace
 % title=\lstname,                   % show the filename of files included with \lstinputlisting;
                                  % also try caption instead of title
  keywordstyle=\color{magenta},          % keyword style
  commentstyle=\color{blue},       % comment style
  stringstyle=\color{red},         % string literal style
  escapeinside={\%*}{*)},            % if you want to add a comment within your code
  morekeywords={*,...}               % if you want to add more keywords to the set
}

\renewcommand{\lstlistingname}{Code}

\begin{document}

\title{Precise Identification of Problems for Structural Test Generation
\titlenote{The original version of this paper is available as
\textit{Precise Identification of Problems for Structural Test Generation} from
Xusheng Xiao, Tao Xie, et al.}}
%
% You need the command \numberofauthors to handle the 'placement
% and alignment' of the authors beneath the title.
%
% For aesthetic reasons, we recommend 'three authors at a time'
% i.e. three 'name/affiliation blocks' +be placed beneath the title.
%
% NOTE: You are NOT restricted in how many 'rows' of
% "name/affiliations" may appear. We just ask that you restrict
% the number of 'columns' to three.
%
% Because of the available 'opening page real-estate'
% we ask you to refrain from putting more than six authors
% (two rows with three columns) beneath the article title.
% More than six makes the first-page appear very cluttered indeed.
%
% Use the \alignauthor commands to handle the names
% and affiliations for an 'aesthetic maximum' of six authors.
% Add names, affiliations, addresses for
% the seventh etc. author(s) as the argument for the
% \additionalauthors command.
% These 'additional authors' will be output/set for you
% without further effort on your part as the last section in
% the body of your article BEFORE References or any Appendices.

\numberofauthors{2}
%
\author{
% You can go ahead and credit any number of authors here,
% e.g. one 'row of three' or two rows (consisting of one row of three
% and a second row of one, two or three).
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
% e-mail address with \email.
%
% 1st. author
\alignauthor
Victor A. Arrascue Ayala\\
       \affaddr{University of Freiburg}\\
       \affaddr{Software Engineering}\\
       \affaddr{Georges-Köhler-Allee 106}\\
       \affaddr{79110 Freiburg}\\
       \affaddr{Germany}\\
       \email{ayalav@tf.uni-freiburg.de}
% 2nd. author
\alignauthor
Anas Alzoghbi\\
       \affaddr{University of Freiburg}\\
       \affaddr{Software Engineering}\\
       \affaddr{Georges-Köhler-Allee 106}\\
       \affaddr{79110 Freiburg}\\
       \affaddr{Germany}\\
       \email{anasalzogbi@yahoo.com}
}
% There's nothing stopping you putting the seventh, eighth, etc.
% author on the opening page (as the 'third row') but we ask,
% for aesthetic reasons that you place these 'additional authors'
% in the \additional authors block, viz.
%\additionalauthors{Additional authors: John Smith (The Th{\o}rv{\"a}ld Group,
%email: {\texttt{jsmith@affiliation.org}}) and Julius P.~Kumquat
%(The Kumquat Consortium, email: {\texttt{jpkumquat@consortium.net}}).}
%\date{30 July 1999}
% Just remember to make sure that the TOTAL number of authors
% is the number that will appear on the first page PLUS the
% number that will appear in the \additionalauthors section.

\maketitle
\begin{abstract}
The use of automated approaches to generate test-cases for parameterized
unit tests (PUTs) has considerably reduced the cost of manual verification.
In practice, however, automated test-generation tools (ATGTs) are not able to achieve high structural
coverage in complex projects: they may fail to produce test-cases, leaving some
statements or branches uncovered. This is mainly due to two problems.
First, an object that is part of the test-case must be brought into a desired state,
but the sequence of calls needed to recursively build the object and its fields is,
at some point of the hierarchy, unavailable. This is known as the ``Object
Creation Problem" (OCP).
The second problem arises whenever a method invocation belongs to an
external library and, at the same time, its returned value is used to determine a branch at a decision point (e.g. an if statement).
This is known as the ``External-Method Call Problem" (EMCP).

To further increase the structural coverage, developers can assist the tool by providing the
objects whose creation failed during the execution. However, the list of problems returned by ATGTs can simply be
too large to be explored manually. Moreover, not all of these problems prevent the tool from achieving a high structural coverage.
We propose Covana, a novel approach that enables efficient co-operative developer testing.
It collects runtime-event information and the test-cases resulting from the tool's execution, and
reports a set of relevant EMCPs and OCPs.
\end{abstract}

%A category including the fourth, optional field follows...
\category{D.2.5}{Software Engineering}{Testing and Debugging}

\terms{Measurement, Reliability}

\keywords{Structural test generation, dynamic symbolic execution, data dependency, 
problem identification} % NOT required for Proceedings

\begin{figure*}[htb]
\centering
\includegraphics[width=0.6\textwidth]{pics/dse.eps}
\caption{DSE conceptual diagram}
\label{condse}
\end{figure*}

\section{Introduction}
Verification and validation are important processes that have a place
at each stage of the software development life cycle. Billions of dollars are spent
on implementing efficient software-testing infrastructures; for
some systems, testing can account for up to 50\% of the total development cost \cite{sommerville}.

Unit testing is the most important part of software testing, since it is able to identify about two thirds of all defects, 50\% of
which are found by white-box structural tests \cite{sommerville}. However,
the external environment needed to isolate and
execute the units is hard to simulate.
This has motivated the integration of the Automated Testing Life-Cycle Methodology (ATLM)
with the software development life cycle. Automated test-generation tools (ATGTs)
are used to generate test-cases that instantiate parameterized unit tests (PUTs), with the goal
of maximizing the structural coverage.

Our work focuses on a state-of-the-art technique for generating test-cases
called dynamic symbolic execution (DSE). This technique interleaves
concrete execution with symbolic execution, which is why it is also
called ``Concolic Testing" (a \textit{portmanteau} of concrete and symbolic). Although this technique achieves a high
structural coverage for small projects, some limitations prevent it
from achieving that goal in complex projects.

The section ``Fundamentals" provides the basic concepts
necessary to understand those limitations. A preliminary study carried out on four large
open-source projects is presented in section ``Problem Analysis". It shows that DSE-based
tools face two main problems:
\begin{itemize}
\item Object-creation Problem (OCP)
\item External-Method-Call Problem (EMCP)
\end{itemize}

The section ``Approach" explains in detail how Covana uses the results produced by
the DSE-based tool, namely a set of test-cases and a set of problem candidates, to prune irrelevant
problems. Covana uses a systematic strategy tailored to each of the two
problems.

The section ``Evaluation" presents the results obtained by applying Covana to
two large open-source projects.

Finally, we discuss related work in the field and conclude.
%MODIFY: Maybe mention something about contributions
%MODIFY: Maybe write something about related works

\section{Fundamentals}
The purpose of Covana is to increase the structural coverage of code, which is a metric
to assess the thoroughness of testing. Structural coverage can be measured according to several criteria,
but in this work only two of them are considered\footnote{Other criteria such as path coverage or
conditional operand coverage were not considered in the original work of Xusheng Xiao, Tao Xie, et al.}:
\begin{itemize}
\item \textbf{Statement Coverage}: s/S, where\\
s = number of statements executed at least once\\
S = total number of executable statements
\item \textbf{Branch Coverage}: d/D, where\\
d = number of decision outcomes evaluated at least once\\
D = total number of decision outcomes
\end{itemize}
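As a toy illustration of these two criteria (our own example, not taken from the original work), consider the following Java method:
\begin{center}
\begin{lstlisting}[caption={toy Java code for the coverage criteria}, label=covtoy]
public class CoverageToy {
    // abs consists of three executable statements and
    // one decision point with two outcomes.
    public static int abs(int x) {
        if (x < 0) {     // decision point
            return -x;   // outcome "true"
        }
        return x;        // outcome "false"
    }
}
\end{lstlisting}
\end{center}
A single call \texttt{abs(-5)} executes two of the three statements and one of the two decision outcomes, yielding a statement coverage of 2/3 and a branch coverage of 1/2; a second call with a non-negative argument raises both to 100\%.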
DSE-based tools generate test-cases for PUTs, which generalize unit tests by
accepting parameters. A test-case is a set of parameters used to instantiate the PUT;
it can be seen as a set of conditions that drive a software system into a
correct or a failure state. PUTs enable automatic case analysis, which avoids writing
implementation-specific unit tests \cite{put}.
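As a minimal illustration (our own sketch with hypothetical names, not taken from the original work), a PUT states a property over its parameters, and every generated test-case is one concrete instantiation:
\begin{center}
\begin{lstlisting}[caption={sketch of a parameterized unit test}, label=puttoy]
public class StackPut {
    // Parameterized unit test: must hold for every
    // (capacity, value) pair that an ATGT may generate.
    public static boolean testPush(int capacity, int value) {
        if (capacity <= 0) {
            return true; // nothing to verify for invalid capacities
        }
        int[] stack = new int[capacity];
        int top = 0;
        stack[top++] = value;                  // behavior under test
        return stack[0] == value && top == 1;  // assertion of the PUT
    }
}
\end{lstlisting}
\end{center}
A concrete test-case such as \texttt{(10, 42)} is one instantiation of this PUT; the ATGT chooses the pairs so as to maximize coverage.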

The role of automated test-generation tools is to generate (a minimal
number of) test-cases per PUT. One strategy combines concrete and
symbolic execution: dynamic symbolic execution (DSE), originally presented in 2004 \cite{dse2004}.

The conceptual diagram in figure \ref{condse} shows how DSE works.
DSE accepts a program P as input and returns a set of test-cases.
The first step is to identify the input variables among the variables of P.
DSE interleaves concrete and symbolic execution. It starts with a concrete execution of P
on arbitrary input. At each branch point, the concrete execution takes one path.
At the end of the execution, the path conditions that guided the execution, together with any errors found,
are saved to another subsystem, e.g. a logging system.
The symbolic execution follows the same path as the last concrete execution. Instead of using
concrete values, input variables are assigned symbolic values, which DSE keeps track of in a
symbolic state. At the end of the execution, the symbolic constraints are gathered.

The aim is to visit a new path in the next concrete execution. The new path is identical to the previous one,
except that at the last decision point the not-yet-visited branch is taken
(leading to a depth-first search strategy).
To achieve this, the information gathered during the symbolic execution, together with the negated last
path condition, is given to a constraint solver, which returns the input values to be used in the
next concrete execution.
The overall process is repeated until no new path can be visited. The sets of input values used by the
concrete executions are the test-cases returned by DSE.
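The loop just described can be sketched as follows. This is a deliberately simplified model (our own, not part of the original work): the program under test is reduced to a fixed sequence of branch predicates over one integer input, and a brute-force search over a small range stands in for the constraint solver.
\begin{center}
\begin{lstlisting}[caption={simplified model of the DSE loop}, label=dsetoy]
import java.util.*;
import java.util.function.IntPredicate;

public class MiniDse {
    // The "program": one decision point per predicate.
    static final List<IntPredicate> PREDICATES = List.of(
            x -> x > 10,       // branch 1
            x -> x % 2 == 0);  // branch 2

    // Concrete execution: record the outcome at each decision point.
    static List<Boolean> run(int input) {
        List<Boolean> path = new ArrayList<>();
        for (IntPredicate p : PREDICATES) path.add(p.test(input));
        return path;
    }

    // Stand-in for the constraint solver: brute-force an input
    // whose path starts with the requested outcome prefix.
    static Integer solve(List<Boolean> prefix) {
        for (int x = -100; x <= 100; x++)
            if (run(x).subList(0, prefix.size()).equals(prefix)) return x;
        return null; // prefix infeasible
    }

    // Repeatedly negate path conditions, last one first,
    // until no new path can be visited.
    public static List<Integer> explore(int arbitraryInput) {
        List<Integer> testCases = new ArrayList<>();
        Set<List<Boolean>> visited = new HashSet<>();
        Deque<Integer> worklist = new ArrayDeque<>();
        worklist.add(arbitraryInput);
        while (!worklist.isEmpty()) {
            int input = worklist.poll();
            List<Boolean> path = run(input);
            if (!visited.add(path)) continue; // path already explored
            testCases.add(input);
            for (int i = path.size() - 1; i >= 0; i--) {
                List<Boolean> prefix = new ArrayList<>(path.subList(0, i));
                prefix.add(!path.get(i));     // negate one condition
                Integer next = solve(prefix);
                if (next != null) worklist.add(next);
            }
        }
        return testCases; // one input per distinct execution path
    }
}
\end{lstlisting}
\end{center}
With the two predicates above, \texttt{explore} returns one input for each of the four feasible execution paths, i.e. the set of test-cases a DSE engine would report.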

If DSE were applied to the sample code \ref{dse} \cite{wiki}, the returned test-cases would allow exploring the complete code.
Conceptually, DSE generates an execution path tree, as shown in fig. \ref{executionPathTree}.

\newpage
\begin{center}
\lstinputlisting[caption={sample code in Java}, label=dse]{code/dseExample.java}
\end{center}

\begin{figure}[htb]
\centering
  \includegraphics[scale=1.2]{pics/paths.eps}
  \caption{execution path tree for code \ref{dse}}
  \label{executionPathTree}
\end{figure}


\section{Problem Analysis}

\begin{table*}
\centering
\caption{Main problems for not-covered branches in four open source projects}
\label{analysis}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
\textbf{Project}&\textbf{LOC}&\textbf{Cov\%}&\textbf{OCP}&\textbf{EMCP}&\textbf{Boundary}&\textbf{Limitation}\\ \hline
SvnBridge    & 17.1K & 56.26 & 11 (42.31\%)  & 15 (57.69\%)  &  0 (0\%)        & 0 (0\%)\\ \hline
xUnit           & 11.4K & 15.54 & 8 (72.73\%)    &  3 (27.27\%)  &  0 (0\%)        & 0 (0\%)\\ \hline
Math.Net     & 3.5K   & 62.84 & 17 (70.83\%)  &  1 (4.17\%)    &  4 (16.67\%)  & 2 (8.33\%)\\ \hline
QuickGraph & 8.3K   & 53.21 & 10 (100\%)     &  0 (0\%)         &  0 (0\%)        & 0 (0\%)\\ \hline
\textbf{TOTAL}        & \textbf{40.3K} & \textbf{49.87} & \textbf{46 (64.79\%)}  & \textbf{19 (26.76\%)}  &  \textbf{4 (5.63\%)}    & \textbf{2 (8.32\%)}\\ \hline
\end{tabular}
\end{table*}

Although DSE achieves high structural coverage in small projects, in practice a few limitations
prevent the tool from achieving this goal in complex ones. A study was conducted by applying a DSE-based tool called PEX \cite{pex} to
four active open-source projects:
\begin{enumerate}
\item SvnBridge \cite{svnBridge}
\item xUnit \cite{xUnit}
\item Math.Net \cite{math}
\item QuickGraph \cite{quickGraph}
\end{enumerate}
The strategy was the following:
\begin{enumerate}
\item PEX was applied until it successfully covered all methods or it ran out of memory.
\item After PEX generated the test-cases and coverage files, 10 source files with the lowest coverage were selected for each project.
\item The problems in not-covered branches were analyzed\footnote{Further details available at 
\\http://research.csc.ncsu.edu/ase/projects/covana/}.
\end{enumerate}

Table \ref{analysis} shows the obtained results. The first column contains the name of the project;
the second one reports the lines of code (LOC);
the third column shows the achieved statement coverage. The remaining four columns report the
number of not-covered branches caused by the problems indicated in the header of each column.

The most recurrent problem, with 64.79\%, is the so-called object-creation problem (OCP), followed
by the external-method-call problem (EMCP), with 26.76\%. These problems are
addressed by Covana.
The others were not treated because they relate to weaknesses of certain DSE components,
such as the constraint solver.

\subsection{Object-creation Problem (OCP)}
The object-creation problem appears whenever DSE is not able to produce desirable object
states. This might happen when one method in the sequence of calls is not accessible,
e.g. due to its visibility.
\begin{center}
\lstinputlisting[caption={Java code illustrating OCP}, label=ocp]{code/ocpExample.java}
\end{center}
Code \ref{ocp} illustrates the problem.
The class \texttt{FixedSizeStack} contains an inner private field {\it stack} of type \texttt{Stack}. \texttt{Stack} has a field {\it items}
of type \texttt{List<object>}, and its method \texttt{count()} returns the number of objects stored in {\it items}.

\texttt{FixedSizeStack} fixes an upper bound on the number of objects that can be pushed.
The method \texttt{Push} pushes an object onto the stack, but also enforces this constraint by
throwing an exception whenever the bound is reached.

Let us assume that the method \texttt{TestPush} is a parameterized unit test used to cover the method \texttt{Push}.
The method accepts a \texttt{FixedSizeStack} object and an object to be pushed. In order to cover line 9 and throw
the exception, a \texttt{FixedSizeStack} object whose inner stack already contains 10 objects is needed.
If the methods to modify the state of a \texttt{Stack} object were not accessible to DSE, then it would not be
possible to cover line 9.

It is important to notice that the field declaration hierarchy can be built up to the point where
it is no longer possible to call the proper method. In this case the hierarchy consists of \texttt{FixedSizeStack},
\texttt{FixedSizeStack.stack} and \texttt{Stack.items}.

\newpage
\subsection{External-Method Call Problem (EMCP)}
This problem appears whenever the program under test contains calls to external methods and one of the
two following conditions holds:
\begin{center}
\lstinputlisting[caption={Java code illustrating EMCP}, label=emcp]{code/emcpExample.java}
\end{center}

\begin{enumerate}
\item The returned value of the external method is assigned to variables, which are used in decision points
(if or while predicates).
\item The external method throws an exception that prevents subsequent code from being covered.
\end{enumerate} 
Code \ref{emcp} illustrates the two cases.
In line 3, the returned value of \texttt{File.Exists} is used to decide which branch
to take. The symbolic-execution step of DSE identifies the dependence between {\it configFilename}
and the string {\it assemblyFile}, which is also the program input. It then creates a symbolic constraint,
which is given to the constraint solver. Unfortunately, the best the solver can do is create an arbitrary string. The
system is not able to understand the semantics of the method and to generate a name that corresponds to a
file actually present in the environment. Because of this, line 4 will not be covered.

Line 10 shows an example of the second kind of EMCP. Assume that the method \texttt{Path.GetFullPath} throws
an exception if the string argument {\it assemblyFilename} is not a valid file name. Again, DSE cannot
generate a meaningful string, and the lines from 11 on remain uncovered.

Not every external-method call causes a lower structural coverage. Lines 17 and 18 use the static method
\texttt{String.Format}: its result neither appears inside an if statement nor does the call throw an exception, and DSE is
able to provide an assignment for {\it actual} that covers both statements.


\subsection{Other problems}
OCP and EMCP are not the only problems.
Another one, called the ``Boundary Problem", appears whenever
the number of iterations of a loop depends on symbolic values.
In this case DSE tries to explore all possible paths systematically, which prevents it from exploring the
code after the loop.

The last identified problem, shown in the column ``Limitation" of table \ref{analysis}, regards the
inability of the constraint solver to provide values for the concrete executions. This might
happen, for example, when program inputs require floating-point arithmetic for their calculation.
The constraint solver might introduce approximation errors and thus prevent some paths from being covered.

\section{Approach}
In this section we present our solution, Covana, which overcomes the previously mentioned problems. Covana is a three-stage approach that identifies the relevant EMCPs and OCPs preventing automated test-generation tools from achieving high structural coverage. The remainder of this section gives an overview of the approach, as well as details on how the three stages (Problem-Candidates Identification, Forward Symbolic Execution and Data-Dependence Analysis) are applied along the identification process.
\subsection{Overview}
Covana's main goal is to help developers localize code fragments that prevent ATGTs from achieving high structural coverage. Covana takes as input the program under test (or the parameterized unit test) as well as the generated test inputs. By applying the three consecutive stages once, Covana reports the relevant problems of both categories, EMCP and OCP.

\begin{figure*}[htb]
\centering
\includegraphics[width=0.6\textwidth]{pics/CovanaConceptual.eps}
\caption{Covana conceptual diagram}
\label{Covanadiagram}
\end{figure*}

Figure \ref{Covanadiagram} depicts the three stages of Covana. In the first stage, problem-candidates identification, symbolic values are assigned to the generated test inputs; the dynamic symbolic execution (DSE) engine then performs a forward symbolic execution on those symbolic inputs. During execution, the DSE engine raises runtime events (method entry, method exit) that help Covana identify potential problem sources, the so-called problem candidates. In the second stage, forward symbolic execution, the problem candidates are analyzed further by assigning symbolic values to their elements and applying a forward symbolic execution on them again. The outputs of this stage are the symbolic expressions of the problem candidates and the exceptions caught during the forward symbolic execution. The third stage, data-dependence analysis, takes the outputs of the previous stages and computes the data-dependencies of partially-covered branch statements on each of the problem candidates. Whenever a partially-covered branch statement is data-dependent on a problem candidate, Covana considers that candidate a problem and reports it. The following sections explain in detail how these stages identify EMCPs as well as OCPs.
 
\subsection{EMCP identification}
Two factors characterize this problem. The first is that an external method is invoked from the program under test. External in this context means that the source code of such methods is not available for examination; therefore, ATGTs cannot manipulate their arguments in order to force certain returned values. These methods reside in binary libraries, and method-instrumentation information is required to discover such calls. The second factor is that the returned values of those external-method calls are used in the predicates of branch statements. Since the tools have no influence on the returned values of external-method calls, they are unlikely to generate test inputs that force the branch statement to cover the uncovered branch.
\subsubsection{EMCP-candidates identification}
The DSE engine triggers a method-exit event whenever a method call finishes execution. Equipped with instrumentation information, the method-exit event is an ideal vehicle for discovering EMCP candidates. During the forward symbolic execution of the first stage, Covana monitors the runtime events raised by the DSE engine. To identify EMCP candidates, Covana catches method-exit events and checks whether the method is instrumented by DSE by examining the instrumentation information attached to the event. Covana considers a method call an EMCP candidate if it is not instrumented by DSE.
However, not all external-method calls are EMCPs: a noticeable number of problem candidates can already be pruned in the first stage, such as external-method calls with constant arguments. Those calls play no role in preventing ATGTs from achieving higher structural coverage because they are totally independent from the program's inputs. Therefore, only external-method calls whose arguments have data-dependencies on program inputs are considered candidates. Since Covana has assigned symbolic values to the program inputs, it can check whether the method arguments contain symbolic expressions of program inputs; if they do, the method arguments are data-dependent on program inputs.
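This pruning step can be sketched as follows (our own simplified model with hypothetical event fields; the actual instrumentation information exposed by the DSE engine is richer):
\begin{center}
\begin{lstlisting}[caption={sketch of EMCP-candidate pruning}, label=emcpfilter]
import java.util.List;
import java.util.stream.Collectors;

public class EmcpFilter {
    // Hypothetical method-exit event as observed by Covana.
    public record MethodExitEvent(String method,
                                  boolean instrumented,
                                  boolean argsSymbolic) {}

    // Keep only uninstrumented (external) calls whose arguments
    // carry symbolic expressions of the program inputs.
    public static List<String> emcpCandidates(List<MethodExitEvent> events) {
        return events.stream()
                .filter(e -> !e.instrumented())        // external method
                .filter(MethodExitEvent::argsSymbolic) // depends on inputs
                .map(MethodExitEvent::method)
                .collect(Collectors.toList());
    }
}
\end{lstlisting}
\end{center}
In this model, an external call with only constant arguments carries no symbolic expression and is pruned, while an external call applied to a program input survives as a candidate.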

\subsubsection{Forward Symbolic Execution}
This stage has two main goals in terms of identifying EMCPs:
\begin{itemize}
\item finding the branch statements that are data-dependent on problem candidates, because in this case the problem candidate is more likely to be a real problem originator; and
\item catching uncaught exceptions caused by external-method calls, since those exceptions prevent ATGTs from discovering the remaining parts of the program code from the calling point on.
\end{itemize}
In order to accomplish the first goal, Covana assigns symbolic values to the elements of the problem candidates and searches for symbolic expressions on those elements in the predicates of the branch statements. All candidates on which some branch statement is data-dependent are forwarded to the final stage for further examination.\\
Catching runtime exceptions, on the other hand, requires actual execution. Therefore, Covana uses the generated test inputs (coming from the ATGT), along with the symbolic values assigned to the elements of the identified problem candidates, to perform a forward symbolic execution and monitor it. During execution, Covana collects all thrown exceptions and keeps them together with their stack traces.
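The exception-monitoring part of this stage can be sketched as follows (our own simplified model; Covana obtains the exceptions from the monitored DSE execution rather than from a plain try/catch harness):
\begin{center}
\begin{lstlisting}[caption={sketch of exception collection}, label=exccollect]
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ExceptionCollector {
    // One recorded failure: the offending input plus the thrown
    // exception, whose stack trace identifies the calling point.
    public record Failure(String input, Throwable error) {}

    public static List<Failure> collect(List<String> testInputs,
                                        Consumer<String> programUnderTest) {
        List<Failure> failures = new ArrayList<>();
        for (String input : testInputs) {
            try {
                programUnderTest.accept(input);
            } catch (Throwable t) {
                failures.add(new Failure(input, t)); // keep stack trace
            }
        }
        return failures;
    }
}
\end{lstlisting}
\end{center}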

\subsubsection{Data-Dependence Analysis}
The collected information is finally analyzed, and the relevant EMCPs are reported as the result of this stage. Covana examines the information about method calls in the previously gathered stack traces and looks for external-method calls that have thrown exceptions and thereby caused the remaining code not to be covered. Such external-method calls are directly reported as relevant EMCPs.\\
The previous stage also forwarded a collection of external-method calls for which Covana discovered that some partially-covered branch statements are data-dependent on their returned values (line 3 in code \ref{emcp} is an example). Such external-method calls form the other category of EMCPs reported by Covana.

\begin{table*}
\centering
\caption{Evaluation results of Covana's efficiency in identifying EMCP and OCP}
\label{evaluation}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{}  & \multicolumn{4}{c|}{\textbf{OCP}} & \multicolumn{4}{c|}{\textbf{EMCP}}\\ \hline
\textbf{Application's package}&\textbf{\#File}&\textbf{\#Identif.}&\textbf{\#Real}&\textbf{\#FP}&\textbf{\#FN}&\textbf{\#Identif.}& \textbf{\#Real}&\textbf{\#FP} &\textbf{\#FN}\\ \hline
xUnit                                        & 71 & 68  & 67  & 13  &  12  & 24 & 24 & 0 & 0\\ \hline
xUnit.Extensions                        & 17 &  7   &   5  &   3  &   1   &  2  &  2 & 0 & 0\\ \hline
xUnit.Console                            &  7 &  2   &   2  &   0  &   0   &  2  &  2 & 0 & 0\\ \hline
xUnit.Gui                                   & 12 &  3  &   3  &   0  &   0   &  1  &  3 & 0 & 2\\ \hline
xUnit.Runner.Msbuild                  &  6 &  15 &  14  &   1  &   0   &  0  &  0 & 0 & 0\\ \hline
xUnit.Runner.Tdnet                    &  3 &  5   &   5  &   0  &   0  &  1  &  1 & 0 & 0\\ \hline
xUnit.Runner.Utility                     & 28 &  7  &  12  &   0  &   5   &  9  &  9 & 0 & 0\\ \hline
Quickgraph                               &  3 &  0   &  0    &   0  &   0  &  0  &  0 & 0 & 0\\ \hline
Quickgraph.Algorithms               & 12 &  7  &   11  &   0  &   4   &  0  &  0 & 0 & 0\\ \hline
Quickgraph.Algorithms.Graphviz  & 14 & 20 &   20  &   2  &   2   &  4  &  3 & 1 & 0\\ \hline
Quickgraph.Collections               & 19 &  6  &   11  &   1  &   6   &  0  &  0 & 0 & 0\\ \hline
Quickgraph.Concepts                 & 35 &  5  &   5  &   0  &   0   &  0  &  0 & 0 & 0\\ \hline
Quickgraph.Exceptions               &  3 &  0   &   0  &   0  &   0   &  0  &  0 & 0 & 0\\ \hline
Quickgraph.Predicates               &  9 &  8   &   8  &   0  &   0   &  0  &  0 & 0 & 0\\ \hline
Quickgraph.Representations       &  3 &  2   &   2  &   0  &   0   &  0  &  0 & 0 & 2\\ \hline
\textbf{TOTAL}                         & \textbf{242} & \textbf{155} & \textbf{165}  &   \textbf{20}  &   \textbf{30}   &  \textbf{43}  &  \textbf{44} & \textbf{1} & \textbf{2}\\ \hline
\end{tabular}
\end{table*}

\begin{table*}
\centering
\caption{Evaluation results of Covana's efficiency in pruning irrelevant problems}
\label{evaluation2}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|} \hline
  & \multicolumn{5}{c|}{\textbf{OCP}} & \multicolumn{5}{c|}{\textbf{EMCP}}\\ \hline
\textbf{Applicat.}&\textbf{\#Cand.}&\textbf{\#Id.}&\textbf{\#Pruned}&\textbf{\#FP}&\textbf{\#FN}&\textbf{\#Cand.}&\textbf{\#Id.}& \textbf{\#Pruned}&\textbf{\#FP} &\textbf{\#FN}\\ \hline
xUnit                                        & 335 & 107  & 228 (68.06\%)  & 17  &  18  & 1313 & 39 & 1274 (97.03\%) & 0 & 2\\ \hline
QuickGraph                               &  116 &  48   &  68 (58.62\%)    &   3  &   12  &  297  &  4 & 293 (98.65\%) & 1 & 0\\ \hline
\textbf{TOTAL}                         & \textbf{451} & \textbf{155} & \textbf{296 (65.63\%)}  &   \textbf{20}  &   \textbf{30}   &  \textbf{1610}  &  \textbf{43} & \textbf{1567 (97.33\%)} & \textbf{1} & \textbf{2}\\ \hline
\end{tabular}
\end{table*}

\subsection{OCP Identification}
OCPs appear when ATGTs fail to produce all the object states needed to satisfy both the condition and the negated condition of branch statements. Covana identifies such problems through the following stages.
\subsubsection{OCP-Candidates Identification}
Candidates are non-primitive program inputs and their non-primitive fields or data members. As mentioned before, in this stage DSE executes the program under test symbolically, using the generated test inputs, while Covana monitors the execution. DSE triggers a method-entry event whenever a method in the program under test is called. This event is useful for identifying OCP candidates because, on the one hand, it provides information about the method parameters and, on the other hand, it marks the use of program inputs in the program under test. When a method-entry event is triggered, Covana inspects its arguments, chooses the non-primitive program inputs and their fields as candidates, and forwards them to the next stage.
\subsubsection{Forward Symbolic Execution}
In this stage Covana assigns symbolic values to the problem candidates and performs forward symbolic execution on the generated test inputs. Since an OCP requires that non-primitive program inputs be used in branch predicates, Covana searches for partially covered branch statements whose predicates are data-dependent on program inputs or on fields of program inputs. It does so by looking for symbolic expressions of problem candidates in the predicates of the uncovered branch statements.
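The dependency check performed in this stage can be sketched as follows. The \texttt{Branch} record and the flat set of symbol names are simplifying assumptions made for illustration, not Covana's real data model.

```java
import java.util.Set;

// Sketch of the check performed during forward symbolic execution: a branch
// is flagged when it is only partially covered and its predicate's symbolic
// expression mentions a symbolic variable introduced for a problem candidate.
public class BranchFilter {

    // Hypothetical, simplified representation of a branch statement.
    public record Branch(String location,
                         Set<String> symbolsInPredicate,
                         boolean fullyCovered) {}

    public static boolean dependsOnCandidate(Branch b,
                                             Set<String> candidateSymbols) {
        if (b.fullyCovered()) {
            return false;            // fully covered branches need no analysis
        }
        for (String s : b.symbolsInPredicate()) {
            if (candidateSymbols.contains(s)) {
                return true;         // predicate is data-dependent on a candidate
            }
        }
        return false;
    }
}
```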
\subsubsection{Data-Dependency Analysis}
Finally, the results of the second stage are analyzed to find the sources of the OCPs. Every non-primitive program input on which some partially covered branch statement is data-dependent is directly reported as an OCP. If, however, a partially covered branch statement is data-dependent on a field of a program input, the exact fields that cause the problem must be identified and reported. To this end, Covana builds a field declaration hierarchy and applies an algorithm called Object Creation Problem Analysis over that hierarchy.
A field declaration hierarchy is a structure that reflects the composition hierarchy of program inputs; it is built from the symbolic expressions over program inputs that appear in partially covered branch statements (the results of the second stage). Figure \ref{pathconditions} shows how the field hierarchy is extracted from the path conditions of line 8 in Code \ref{ocp}.
\begin{figure}[htb]
\centering
  \includegraphics[scale=0.6]{pics/hierarchy.eps}
  \caption{Path conditions}
  \label{pathconditions}
\end{figure}
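The extraction of the hierarchy from path conditions can be sketched as follows. The dotted-string encoding of symbolic access paths (e.g. \texttt{"input.config.timeout"}) is an assumption made for illustration; each prefix of a path records that the next field is declared inside the preceding one.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of building a field declaration hierarchy from the symbolic access
// paths occurring in a partially covered branch's path conditions. Every
// prefix of an access path is one node of the hierarchy.
public class FieldHierarchy {

    // "a.b.c" -> ["a", "a.b", "a.b.c"]
    public static List<String> prefixes(String accessPath) {
        List<String> result = new ArrayList<>();
        StringBuilder prefix = new StringBuilder();
        for (String part : accessPath.split("\\.")) {
            if (prefix.length() > 0) {
                prefix.append('.');
            }
            prefix.append(part);
            result.add(prefix.toString());
        }
        return result;
    }

    // Union of the prefixes of all access paths, in first-seen order.
    public static Set<String> build(List<String> accessPaths) {
        Set<String> hierarchy = new LinkedHashSet<>();
        for (String path : accessPaths) {
            hierarchy.addAll(prefixes(path));
        }
        return hierarchy;
    }
}
```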

Code \ref{algo} shows the Object Creation Problem Analysis algorithm, which takes as input \texttt{Fields}, the extracted field declaration hierarchy, and \texttt{Branch}, the uncovered branch, and returns the OCP as output.
\begin{center}
\lstinputlisting[caption={Object Creation Problem Analysis }, label=algo]{code/Algorithm.java}
\end{center}
Line 5 checks whether the field declaration hierarchy contains only one element; in that case this element is the program input itself and the algorithm reports it as an OCP. Otherwise, the algorithm examines all the fields. The method \texttt{IsAssignable} in line 13 checks whether the field is assignable through its declaring class, i.e., whether the declaring class has a constructor or a public setter method that provides the functionality of creating new objects. ATGTs can use such functionality to create objects of the field's class type. If the condition in line 14 is satisfied, DSE cannot find a constructor or a public setter in the declaring class with which to create objects for the current field, so the algorithm reports the declaring class as an OCP. By iterating over all the elements of the field hierarchy, the algorithm finds the fields that prevent DSE from achieving higher structural coverage.
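The \texttt{IsAssignable} check described above could be approximated with reflection as follows: a field is considered assignable when its declaring class exposes a public constructor parameter or a public one-argument setter compatible with the field's type. This is an illustrative approximation of the check, not Covana's implementation.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

// Reflection-based sketch of IsAssignable: does the declaring class offer a
// public way (constructor parameter or setter) to put a value of the field's
// type into an object, so that an ATGT could construct the required state?
public class Assignability {

    public static boolean isAssignable(Class<?> declaring, Class<?> fieldType) {
        // getConstructors() returns public constructors only.
        for (Constructor<?> c : declaring.getConstructors()) {
            for (Class<?> p : c.getParameterTypes()) {
                if (p.isAssignableFrom(fieldType)) {
                    return true;
                }
            }
        }
        // getMethods() returns public methods only; look for setters.
        for (Method m : declaring.getMethods()) {
            if (m.getName().startsWith("set")
                    && m.getParameterCount() == 1
                    && m.getParameterTypes()[0].isAssignableFrom(fieldType)) {
                return true;
            }
        }
        return false;
    }
}
```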

%ACKNOWLEDGMENTS are optional
\section{Evaluation}
In this section we present the results of the two evaluations of Covana.
The two subjects of evaluation were:
\begin{enumerate}
\item \textbf{xUnit} (11.4 KLOC) \cite{xUnit}:  a unit testing framework for development with .NET technologies.
\item \textbf{QuickGraph} (8.3 KLOC)  \cite{quickGraph}: a C\# graph library which provides data structures and
graph algorithms.
\end{enumerate}
The subjects were evaluated with Pex (v0.24.50222.1), a Microsoft DSE-based test-generation tool.
The achieved coverage, together with the run-time information collected during the tool's
execution, serves as the input for Covana.

The measurement of the effectiveness of Covana is based on two criteria that will be presented 
separately.  
In the following sections we refer to:
\begin{itemize}
\item \textbf{False positives (FP)}: irrelevant problem candidates which are not pruned by Covana.
\item  \textbf{False negatives (FN)}: relevant problems which are pruned by Covana.
\end{itemize}

\newpage
\subsection{Problem identification}
In this section we examine how effective Covana is in identifying EMCPs and OCPs.
To this end, the problems reported by the tool were manually classified as real problems, false
positives, or false negatives.
If a problem was identified as an EMCP candidate, a mock object was built manually and Pex was
reapplied.
Analogously, if a problem was identified as an OCP candidate, the PUT was modified to include the
sequence of method calls necessary to build the object hierarchy.
In both cases, if the previously uncovered branches were then covered, i.e., the coverage increased,
the problem was classified as relevant, and as irrelevant otherwise.

Table \ref{evaluation} reports the results concerning problem identification. Column ``File\#" shows
the number of source files in each package. The two main groups of columns, ``OCP" and ``EMCP",
show the statistics for each kind of problem. Column ``Real" gives the number of real
problems (checked manually). Column ``Identif." (Identified) reports the number of problems
identified by Covana. ``FP" and ``FN" denote false positives and false negatives, respectively.

The results show that Covana identifies 155 OCPs with 20 false positives and 30 false negatives.
This is due to Covana's limitation with respect to static fields of classes. The results for
EMCPs are even better: Covana identifies 43 EMCPs with 1 false positive and 2 false negatives.

\subsection{Irrelevant-Problem-Candidate Pruning}
In this section, we will see how effectively Covana prunes irrelevant problem candidates. The 
percentage of pruned problems is calculated from the number of identified problem candidates and 
the number of problems reported by Covana. False positive and false negatives, which are again
manually identified, complete the picture about Covana's efficiency. 

Table \ref{evaluation2} reports the results concerning the pruning's efficiency. Column ``\#Cand."
reports the overall number of problem candidates. ``\#Id." (Identified) shows the number of problems
identified by Covana. ``FP" and ``FN" have the same meaning as in the previous table.

The best results are obtained in pruning EMCPs (97.33\%), with only 1 false positive and 2 false
negatives. Pruning of irrelevant OCPs reaches 65.63\% with 20 false positives and 30 false
negatives.

\section{Related Work}

The work most closely related to Covana is the approach of Saswat Anand, Alessandro Orso, et al. \cite{rw1}, who address the same problem. Using static analysis, they detect parts of the program in which symbolic execution might fail, e.g., when the constraints may be of a type not handled by the underlying decision procedure (similar to OCPs), or when some parts involve third-party libraries (similar to EMCPs). They developed a tool (Stinger) that integrates with Java Pathfinder. Their approach to EMCPs is limited to identifying external-method calls that receive symbolic values as arguments. Covana, in contrast, also determines when partially covered branch statements have data dependencies on the return values of external-method calls, and thus identifies a larger number of real problems.

Pavlopoulou and Young \cite{rw2} proposed a tool written in Java to analyze the residual coverage of deployed software; however, only the performance aspect of residual test coverage monitoring was investigated.
A more generic approach by Dincklage and Diwan \cite{rw3} consists in defining a new language that captures and analyzes parts of programs. When a bug is found by an external tool, e.g., Eclipse, this information can be augmented with the reason for the failure, so that a developer can decide which bugs are worth solving. The goal of Covana is different, however, because it aims to increase structural coverage.


\section{Conclusion}
In this paper we presented an approach (Covana) that helps developers localize the code positions that prevent automated test-generation tools from achieving high structural coverage. Based on our preliminary study, two main problems (EMCPs and OCPs) are taken into consideration, along with a technique to discover them based on symbolic execution and on the data dependency of partially covered branch statements on program inputs. Finally, we assessed the effectiveness of Covana in identifying the problems by evaluating it on two projects.
%
% The following two commands are all you need in the
% initial runs of your .tex file to
% produce the bibliography for the citations in your paper.
\bibliographystyle{abbrv}
\bibliography{sigproc}  
\nocite{*}
% sigproc.bib is the name of the Bibliography in this case
% You must have a proper ".bib" file
%  and remember to run:
% latex bibtex latex latex
% to resolve all references
%
% ACM needs 'a single self-contained file'!
%
%APPENDICES are optional
%\balancecolumns
\end{document}
