%\documentclass{acm_proc_article-sp}
\documentclass{sig-alternate}
 \usepackage{color}

 \usepackage{relsize}
 \usepackage{fancyvrb}
\newcommand{\CodeIn}[1]{{\small\texttt{#1}}}

\begin{document}

\conferenceinfo{DEFECTS'09,}{July 19, 2009, Chicago, Illinois, USA.}
\CopyrightYear{2009}
\crdata{978-1-60558-654-0/09/07}

\title{Property-Based Coverage Criterion}

\numberofauthors{2}
\author{
\alignauthor
Fadi Zaraket\\
       \affaddr{Electrical and Computer Engineering}\\
       \affaddr{American University of Beirut}\\
       \email{fz11@aub.edu.lb}
% 2nd. author
\alignauthor
Wes Masri\\
       \affaddr{Electrical and Computer Engineering}\\
       \affaddr{American University of Beirut}\\
       \email{wm13@aub.edu.lb}
}

\maketitle

\begin{abstract}
Coverage metrics answer the question of whether
we have adequately checked a given software artifact.
For example, statement coverage metrics measure how many
lines of code were executed and how often.
Path coverage metrics measure the frequency with which
interleaved branches of code execute.
In recent years, researchers have introduced several
effective static analysis techniques for
checking software artifacts.
Consequently, more and more developers have started
embedding properties in code,
and techniques and tools have emerged that automatically
infer system properties where none explicitly exist.
We hypothesize that it is often more effective to evaluate
test suites based on their coverage of system properties
than on their coverage of structural program elements.
In this paper, we present a novel coverage criterion and
metrics that evaluate test cases with respect to their
coverage of properties,
and that measure the completeness of the properties
themselves.

\end{abstract}

% A category with the (minimum) three required fields
\category{D.2.5}{Software}{Testing and debugging}

%A category including the fourth, optional field follows...
\category{D.2.8}{Software Engineering}{Metrics}[coverage measures]

\terms{Verification, theory, measurement}

\keywords{Coverage metrics, properties, don't care detection}

\section{Introduction}

Given a software artifact, 
a {\it test suite} is a collection of {\it test cases},
i.e., input valuations for all input variables,
each with an {\it oracle}.
An oracle characterizes
the expected behavior of the artifact on a given input.
Regression test suites aim to help developers avoid
reintroducing defects that have already been fixed.
Typically, they exercise the basic functionalities
of the system under test,
and with time they grow to become the main testing
mechanism of the artifact, thus playing a more
prominent role than merely guarding against
regression defects.

Coverage metrics measure the adequacy and completeness
of a test suite. 
{\it All-path coverage}~\cite{zhu1997} requires the 
existence of at least one test case for every path of 
execution of an artifact. 
Researchers have proposed weaker forms of the all-path
criterion that limit the length of the
paths and the number of loop iterations~\cite{Cadar06exe,godefroit05dart,cuteSen2005,williams05pathcrawler,xu06data}.
{\it Statement coverage} is a criterion where each
statement is required to be executed at least once
by at least one test case~\cite{zhu1997}.


Several static analysis techniques have been presented recently
to
address the verification of software
programs~\cite{cbmcCKL2004,HolzSpin97,jpfVisser03visser,kodkodTJ2007}.
With the emergence of these techniques and their
associated tools, it has become more
common for software artifacts to include formal
properties, embedded in forms such as assertions
and annotations, that describe the expected
behavior of the code.
Examples of such properties are safety properties
(null pointer and array boundary checks), 
assume-guarantee properties 
(pre-conditions and post-conditions), 
invariants (data structure or loop invariants), 
as well as user assert statements.
In the absence of these properties, developers can use
existing tools capable of inferring them~\cite{yang04}.

We view a property as a logical expression
over a set of variables drawn from the
artifact, one that is assumed to hold at its
location and in its context.
The state space of a property is thus all
the possible values its variables can take.

In this paper, given an artifact with a set
of properties therein and a test suite, 
we propose a novel property-based coverage metric.
The metric measures how many properties are
tested and how much of the state space of these 
properties is covered by the tests. 
We also propose a mechanism to measure the 
completeness of a given set of properties relative
to the variables referenced in the properties.
The mechanism can also highlight cases where the 
developer probably forgot to claim a behavior 
for the code.

%The rest of this paper is structured as follows.

\begin{table}[tb]
\centering
\begin{tabular} {p{5.2cm}p{4.5cm}}
\begin{Verbatim}[fontsize=\relsize{-1},numbers=left,numbersep=4pt, 
frame=topline,framesep=4mm,label=\fbox{Decimal function}]
char powers[8];

int decimal(char * binStr) {
//int a;
  int rc = 0;
  for (int i = 0; i < 8; i++) 
    rc += (binStr[i] - '0')*powers[i];
//assert ( 
//  !(0<= a && a < 8 && binStr[a] == '1') 
//  || rc >= powers[a]);
  return rc; }

int main(int argc, char ** argv) {
  if (argc < 2)
    return 1;

  for (int i = 0; i < 8; i++)
    powers[i] = 1 << (7 - i);

  for (int i = 1; i < argc; i++)
    decimal(argv[i]);
  return 0; }
\end{Verbatim} 
& \\
\end{tabular}
\caption{A binary to decimal conversion function}
\label{t:decimal}
\end{table}

\section{Motivating example}
\label{s:motivation}

The example in Table~\ref{t:decimal} shows
a C++ program that converts a binary string
to its decimal value~\cite{masri2009}.
The function \CodeIn{decimal()} on line 3
uses the \CodeIn{powers}
array to perform the conversion.

The function \CodeIn{main()} acts as a test driver
for \CodeIn{decimal()}. 
It initializes the 
\CodeIn{powers} array and then calls \CodeIn{decimal()} 
for every binary string that was passed on the command 
line. 
Due to the type of \CodeIn{powers}, an overflow occurs 
at line 18 when \CodeIn{i} is 0, and \CodeIn{powers[0]} 
is assigned the value \CodeIn{-128}.

Our test suite is an invocation of the program 
with four input strings ``00010111'', ``01010101'',
``01101101'',  and ``01111110''.
These input strings satisfy both the path coverage
and the statement coverage criteria; however, they
do not trigger the failure.

The commented \CodeIn{assert} statement on lines 8,
9, and 10 claims that
$(0\leq a< 8 \wedge \textit{binStr}[a] = '1')
\Rightarrow rc \geq \textit{powers}[a]$.
If we use the state space of the property 
as our coverage metric, we will need to extend the
test suite to include strings such that for every 
position $0\leq a < 8$ there is a string in our 
test suite with a '1' in position $a$. 

\section{Property-based coverage}
\label{s:propcov}

Covering the full state space of a property fixes
the problem in the \CodeIn{decimal} example.
In practice, however, the state space of a set
of properties can be very large.

We call the smallest subexpression in a property
that evaluates to a Boolean value a {\it term}.
Consequently, the \CodeIn{assert} statement 
becomes $\neg(t_0 \wedge t_1 \wedge t_2) \vee t_3$
where $t_0$ corresponds to $0 \leq a$, $t_1$ corresponds 
to $a < 8$, $t_2$ corresponds to 
$\textit{binStr}[a] = '1'$, and $t_3$ corresponds to 
$\textit{rc} \ge \textit{powers}[a]$.

A test suite fulfills the property term coverage
criterion if it has test cases that induce all
possible combinations of term valuations.

\section{Property completeness}
\label{s:complete}

One might argue that current software code bases
are seldom annotated with properties.
We address this concern by proposing a methodology
that detects incomplete property sets and suggests
how to extend them.

Given a set of properties, we compute the set of
input valuations that the properties do not account
for, called the {\it don't care set} (DC).
We use DC detection techniques~\cite{ZhuKu06SATSweepODC,FuYuMalik2005} 
to compute 
the don't care valuations of property terms. 
We then suggest that the developer either
add new properties that claim a behavior over these
valuations, modify the current properties to cover
the valuations, or deliberately ignore the don't
care sets.

\section{Summary}
\label{s:conclusion}
In this paper, we proposed a novel property-based
coverage criterion for evaluating test
suites.
We also proposed the use of don't care detection
techniques to evaluate the completeness of a
property set and to highlight cases where 
the property set can be extended.
In the future, we plan to explore the 
criterion on real life systems and to compare
it against existing coverage criteria.

\bibliographystyle{abbrv}
\bibliography{my}  

\end{document}
