\documentclass[10pt, conference, compsocconf]{IEEEtran}
\usepackage{fancyvrb}
\usepackage{relsize}
\usepackage{multirow}
\usepackage{hyperref}

\newcommand{\CodeIn}[1]{{\small\texttt{#1}}}
\usepackage[cmex10]{amsmath}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

\begin{document}
\title{Test Suite Evaluation Based on Reachability Analysis of Property State Space} 


% author names and affiliations
% use a multiple column layout for up to two different
% affiliations

\author{\IEEEauthorblockN{Kassem Fawaz, Fadi Zaraket, Hamza Harkous, Wes Masri}
\IEEEauthorblockA{%Electrical and Computer Engineering Department\\
American University of Beirut\\
%Beirut, Lebanon\\
Email: \{kmf04, fz11, hhh20, wm13\}@aub.edu.lb}}

\maketitle

\begin{abstract}
Coverage metrics answer the question of whether we adequately
checked a given software program. For example,
statement coverage metrics measure how many and how often
lines of code were executed. Path coverage metrics measure
how often interleaving branches of
code are executed. Data flow coverage metrics measure the computational
and predicate uses along definition-use paths. 
In recent years, researchers introduced 
effective static analysis techniques for checking the correctness 
of software programs.
Consequently, we started to see more formal properties 
embedded in software programs.
In this paper, we present a novel coverage criterion and metrics
that evaluate test suites based on their coverage
of program properties rather than of structural program
elements. 
We consider properties expressed as propositional and 
temporal formulas with several predicates. 
We measure the property states covered by a given test suite 
against an approximation of the reachable state
space of the property.
Our results show that property coverage can reveal deficiencies 
in test suites that other coverage techniques cannot. 
Our approach can also suggest test cases 
that complete the test suite and may uncover bugs.
\end{abstract}

\begin{IEEEkeywords}
%component; formatting; style; styling;
Coverage criterion; fault localization; static analysis; software testing; properties
\end{IEEEkeywords}


% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle

\section{Introduction}
% no \IEEEPARstart

%A {\em test case} is an evaluation of all input
%variables of a software program across the execution time
%of the program with an {\it oracle}. 
%An oracle expresses the expected behavior of the program
%on the given input. 
A {\em test suite} is a collection of 
{\it test cases} that check the functionality of an associated
software program.
{\em Regression} test suites help developers avoid 
reintroducing defects that have already been fixed. 
They include test cases that check the basic functionality
of the program as well as test cases that trigger all 
previously fixed bugs.
With time, regression suites grow to 
form the main testing mechanism to decide whether the
program is suitable for production. 

Coverage metrics measure the adequacy and completeness
of a test suite. 
{\it All-path coverage}~\cite{zhu1997} requires the 
existence of at least one test case for every path of 
execution of a program.
Researchers proposed weaker forms of the all-path 
criterion and introduced limits on the length of the 
paths and the number of loop iterations~\cite{Cadar06exe,godefroit05dart,cuteSen2005,williams05pathcrawler,xu06data}.
{\it Statement coverage} is a criterion where each
statement is required to be executed at least once~\cite{zhu1997}.

Static analysis 
techniques~\cite{cbmcCKL2004,HolzSpin97,jpfVisser03visser,kodkodTJ2007},
such as model checkers, take as input 
a software program and a set of 
formal properties therein and check whether the properties hold
for the program. 
If not, the model checker returns a counterexample illustrating
how the program violates the property. 
Such techniques can handle safety properties such as null pointer and
array boundary checks, assume-guarantee properties such as 
pre-conditions and post-conditions, invariants such as data 
structure or 
loop invariants, as well as user assert statements. 
Tools exist to automatically infer such properties from 
code in case they do not exist~\cite{yang04,ErnstPGMPTX2007}.

We view a property as a logical expression
over a set of variables selected from the
program. 
For the program to be correct, the property must hold, 
in its place and context, for all test cases. 
We consider the smallest term in a predicate that
evaluates to a Boolean value to be an {\it atomic predicate 
term} and we consider the state space of a property as all
possible evaluations to its atomic predicate terms. 
An atomic predicate term is usually referred to in the literature 
as a clause.

Given a software program, a set  
of properties therein, and a test suite, 
we propose a novel {\em property based coverage} (PBCOV) metric.
The metric measures 
the state space of the properties covered by the test suite. 
Since the state space can be excessively large and many of its states
may be infeasible, we measure the tested states
against an overapproximation of the reachable state space.
We first obtain the covered states by instrumenting the 
program and the properties~\cite{crest}. 
We then compute the missing states using an equivalence 
check~\cite{abc} between a symbolic definition of the properties
and a truth table definition of the covered states. 
We compute an overapproximation of the reachable state space by
performing a symbolic feasibility check on the missing states
against an abstraction of the program using a satisfiability
modulo theory (SMT) solver~\cite{yices}. 

We evaluated PBCOV with the Traffic Alert and Collision Avoidance 
System (TCAS)~\cite{tcas_impl}
benchmark from the Software-artifact Infrastructure 
Repository~\cite{doESE05}. 
We found the TCAS benchmark attractive due 
to its wide use in the literature and because 
properties defined in recent work~\cite{503230} 
already exist for it; moreover, not injecting 
our own properties reduces bias in our experiments.
PBCOV was able to detect deficiencies in the test suite shipped 
with the TCAS benchmark that achieved full or close to full 
coverage with traditional statement, call, branch, decision, data,
and control flow coverage techniques. 
PBCOV also often suggested new test cases that completed the test suite.

In this paper we make the following contributions. 
\begin{enumerate}
\item We present a coverage technique based on 
formal properties that describe correctness of programs.
\item We present a technique to overapproximate the reachable
state space of properties and use that to define a novel 
coverage metric.
\item We present an implementation of PBCOV and we show with 
experimental results that it can detect deficiencies in test suites
deemed very effective with other traditional coverage techniques. 
\end{enumerate}

The rest of this paper is organized as follows. 
We present background information in Section~\ref{s:background}. 
In Section~\ref{s:motivation} we present a motivating example
of PBCOV. 
We compare against related work in Section~\ref{s:related}.
In Sections~\ref{s:pbcov} and~\ref{s:implementation} we present the 
PBCOV technique and discuss
its implementation details, respectively. 
We present our results and experiments in Section~\ref{s:results}. 
Finally, we conclude in Section~\ref{s:conclusion}.

\section{Background}
\label{s:background}

In this section we present preliminary information about
techniques and tools necessary to describe our work.
%People versed in the field may be able to skip this part. 

A property $P: V \mapsto \{\mbox{true}, \mbox{false}\}$ is defined as 
a Boolean expression embedded in a software
program $S$ that maps the variables of the program and its own variables
to a Boolean value. 
We define a predicate term of a property as any subexpression $E$ of
$P$ that evaluates to a Boolean value. 
We define the atomic predicate terms of $P$ as the set of the 
smallest predicate terms of $P$. 
For example, the property $(i<j \wedge j<k ) \rightarrow i<k$ has three
atomic predicate terms $\{ i<j, j<k, i<k\}$.
%We shortly refer to them as terms in the context of the paper 
%especially when we mention term based property coverage. 

We use the Crest~\cite{crest} tool to instrument the software program
and the properties. 
Crest takes a program written in the C programming language and 
instruments
it using the C intermediate language (CIL)~\cite{cil} platform. 
The instrumented code follows the execution of the program over a 
concrete input while constructing a symbolic expression $\Phi$ 
representing the path condition of the program at every executed 
statement. 
Once the execution is done, Crest modifies 
$\Phi$ to represent a path that the program has not yet taken and stores
the result in $\Phi'$. 
It then passes $\Phi'$ to 
Yices~\cite{yices}, a satisfiability modulo theory 
(SMT) solver, to compute an evaluation $e$ of the program input variables 
that drives the program down the desired path. 
If the query is satisfiable, Crest re-executes the program with $e$ 
as input, 
and it repeats this procedure until all paths have been covered.

We also use the model checker ABC~\cite{abc} to compute a Boolean 
equivalence check
between two Boolean expressions. 
The model checker ABC operates on sequential Boolean circuits and uses 
transformation-based verification techniques to check properties of 
these circuits. 

We compare our results against metrics computed using GCOV~\cite{gcov} 
and ATAC~\cite{atac}.
The coverage tool GCOV computes four coverage metrics. It computes 
the percentage of executed source lines, executed branches, 
branches taken at least once,
and function calls executed in the program.

The coverage analysis tool ATAC~\cite{atac} measures block, decision, 
C-use, and P-use coverage of a program. 
Even though the tool dates to 1992, it is still widely 
used to compute data flow coverage~\cite{atac_2009}.
Block coverage measures the percentage of the basic blocks that are 
executed at least once, where 
a basic block is a maximal sequence of instructions without branches or 
function calls. 
Decision coverage reports whether each condition in the program 
evaluates to both \CodeIn{true} and \CodeIn{false} at least once 
during the testing phase. 
Computational use (C-use) and predicate use (P-use) refer to a 
combined measure of control and data flow of the program. 
They track the definitions and usages of variables.
A definition is typically an assignment statement where the variable
appears on the left hand side. 
A computational use is a use of the variable in a 
computation such as an arithmetic
expression, and a predicate use is a use of the variable in a 
predicate expression that evaluates to a Boolean value. 
The C-use measure ensures that there is at least one path between 
the definition and a computational use of a variable. 
The P-use ensures that there is at least one path between the 
definition of the variable and the \CodeIn{true} and \CodeIn{false} 
evaluations of a predicate containing the variable. 

\section{Motivation}
\label{s:motivation}

%\begin{table} [!t]
%\renewcommand{\arraystretch}{1.5}
%\caption{Selection sort }
%\label{t:sort}
%\centering
%\begin{tabular} {p{5.2cm}p{4.5cm}}
%%\begin{Verbatim}[fontsize=\relsize{0},numbers=left,numbersep=4pt,frame=topline,framesep=4mm],label=\fbox{Sort function}]
%\begin{Verbatim}[fontsize=\relsize{0},numbers=left,numbersep=4pt]
%void sort ( int a[], int n) {
%  int current, j, lowestindex, temp;
%  
%  for (current=0; current < n-1; current++){
%    /*find the minimum*/
%    lowestindex = current;
%    for (j=current+1; j < n; j++) {
%      if (a[j] < a[current]) { /* bug here*/
%        lowestindex = j;
%      }
%    }
%    /* swap */
%    if (lowestindex != current) {
%      temp = a[current];
%      a[current] = a[lowestindex];
%      a[lowestindex] = temp;
%    } } }
%\end{Verbatim} 
%& \\
%\end{tabular}
%\end{table}

The following code %Table~\ref{t:sort} 
shows a 
textbook~\cite{findTheBugBarr02} example of selection
sort with a seeded bug. 

%\begin{Verbatim}[fontsize=\relsize{0},numbers=left,numbersep=4pt,frame=topline,framesep=4mm],label=\fbox{Sort function}]
\begin{Verbatim}[fontsize=\relsize{-1},numbers=left,numbersep=4pt]
void sort ( int a[], int n) {
  int current, j, lowestindex, temp;
  
  for (current=0; current < n-1; current++){
    /*find the minimum*/
    lowestindex = current;
    for (j=current+1; j < n; j++) {
      if (a[j] < a[current]) { /* bug here*/
        lowestindex = j;
      }
    }
    /* swap */
    if (lowestindex != current) {
      temp = a[current];
      a[current] = a[lowestindex];
      a[lowestindex] = temp;
    } } }
\end{Verbatim} 

The function \CodeIn{sort} takes as input an array \CodeIn{a} 
of size \CodeIn{n}.
The variables \CodeIn{current} and \CodeIn{j} are the iterators of
the outer and inner loops, respectively. 
The variable \CodeIn{lowestindex} holds the index of
the minimum element so far in the array and \CodeIn{temp} is
used to perform the swap on Lines 14--16. 
The inner loop selects the minimum element in 
\CodeIn{a} starting from \CodeIn{a[current]}.
The test suite $T=\{
t_1=[1~2~3~4~5~6~7~8~9],
t_2=[9~8~7~6~5~4~3~2~1],
t_3=[2~1~3~4~5~6~7~8~9],
t_4=[9~2~3~4~5~6~7~1],
t_5=[9~8~7~4~5~6~7~8~3~2~1],
t_6=[1~2],
t_7=[1], 
t_8=[~]\}$
consists of unsorted arrays of different sizes, a sorted
array $t_1$, a reverse-sorted array $t_2$, and boundary
case arrays $t_7$ and $t_8$. 
We test \CodeIn{sort} with $T$ and we compute 
statement, call, branch, and decision coverage using GCOV~\cite{gcov}. 
We achieve full coverage for all of the above. 

Test suite $T$ achieves full C-use coverage
except for the pair consisting of the definition 
\CodeIn{lowestindex=current} on Line 6
and the use
\CodeIn{a[current]=a[lowestindex]} on Line 15. 
This definition-use pair is infeasible 
because reaching the use contradicts
the governing condition
\CodeIn{lowestindex != current} 
on Line 13. 
This can be fixed by removing the condition since
swapping an element with itself does not alter
the functionality.

Test suite $T$ also achieves full 
P-use coverage except for three 
infeasible P-use pairs. 
The first is the definition 
\CodeIn{j=current+1} on Line 7 
and the
\CodeIn{false} evaluation of the loop predicate
\CodeIn{j < n} on Line 7 also. 
Notice that this is never possible since \CodeIn{current}
is bounded by \CodeIn{current < n-1} on Line 4. 
This can be fixed by relaxing the bound in Line 4 to
be \CodeIn{current < n}. 
The second infeasible pair is the definition  
\CodeIn{lowestindex=current} on Line 6
and the \CodeIn{true} evaluation of predicate 
\CodeIn{lowestindex != current} 
on Line 13.
The last infeasible pair is the definition 
\CodeIn{lowestindex = j}
on Line 9
and the \CodeIn{false} evaluation of the predicate
\CodeIn{lowestindex != current} on Line 13. 
This is also impossible since \CodeIn{j} is guaranteed
to differ from \CodeIn{current}, as it starts
at \CodeIn{current+1} on Line 7 and only gets incremented later.
The last two P-uses can also be eliminated by removing the 
condition on Line 13 without affecting the functionality.
We computed C-use and P-use data flow coverage using
ATAC~\cite{atac}. 

However, \CodeIn{sort} has a bug: the inner loop
does not actually select the minimum. 
To see this, inspect Line 8, which erroneously compares 
against \CodeIn{a[current]} instead of the current 
minimum \CodeIn{a[lowestindex]}. 
We conclude that $T$ is a deficient test suite that scored
full coverage using traditional techniques. 
This motivates our work on property-based coverage. 

We introduce a property $P$ in \CodeIn{sort} asserting that, 
at the end of execution, an arbitrary element \CodeIn{a[k]}
and the next element are in order:
\CodeIn{int k; assert( !(0<=k \&\& k<n-1) || (a[k]<=a[k+1]) )}. 
This property has three atomic predicate terms $p_1=(0\le k)$, 
$p_2=(k < n-1)$, and $p_3=(a[k]\le a[k+1])$. 
Test suite $T$ covers all possible evaluations of 
$\langle p_1, p_2, p_3\rangle$ except for the evaluation
$e=\langle \mbox{true}, \mbox{true}, \mbox{false} \rangle$, which 
describes two adjacent elements of the array that are out of order. 
Unlike the three infeasible reported P-uses of the data flow technique, 
$e$ is feasible and PBCOV returns the test case $[3~1~2~4]$ which completes $T$. 
%We automatically compute the missing evaluation $e$ with an
%equivalence check~\cite{abc} between a static definition of 
%$P$ versus a partial truth table definition collected from the execution 
%of $T$ over \CodeIn{sort} with $P$. 
%We compute the feasibility of the missing evaluation and sometimes 
%an actual test case using a bounded symbolic check~\cite{crest}
%on \CodeIn{sort} with $e$ assigned to $P$.
%In this case we obtain the test case $\{3 1 2 4\}$ which completes
% the test suite. 

\section{Related Work}
\label{s:related}

Coverage metrics measure the adequacy and completeness of a test suite. 
They are divided into code-based, control-flow-based,
and data-flow-based coverage metrics.
Code based techniques, such as statement and branch coverage, identify 
points in the program that are not 
covered by the existing set of test cases.
This category also encompasses condition coverage (also known as 
predicate coverage)~\cite{amannBook},
which checks whether
each Boolean condition in a program evaluates to both of its possible values. 
Multi-condition coverage, a relatively expensive approach,  
checks whether all possible combinations of 
the constituent conditions have been covered. 

Control flow based techniques measure the control flow complexity by 
extracting control flow directed graphs where nodes represent
entries, exits, and decisions and edges represent non-branching 
statements. 
For example, the basis path coverage technique uses cyclomatic 
complexity to determine the number of linearly 
independent paths that it calls the basis set. 
Unlike code based techniques, this metric ensures that every decision 
outcome is checked independently of the others~\cite{path}.

While one might think of PBCOV as
an all-combinations condition coverage in which each atomic
predicate term is treated as a Boolean condition, PBCOV differs
in that a property does not alter the state of the program,
whereas the bodies of decision blocks might.
PBCOV can thus compute the values of all the
atomic predicate terms of a property for a given test case without 
executing all the branches therein. 
The PBCOV metric also measures the executed combinations against
an over approximation of the feasible combinations and not against 
all combinations.

The modified condition/decision coverage (MC/DC) metric requires 
that each atomic predicate term be shown to independently determine 
the outcome of each predicate. 
This is also referred to as making the term active. 
That is, for each term $t$, there are two test cases in which the 
truth values of all evaluated clauses except $t$ are the same, 
and the predicate as a whole evaluates to \CodeIn{true} for one of 
those test cases and \CodeIn{false} for the other. 
Note that MC/DC is also called Restricted Active Clause 
Coverage~\cite{amannBook}.

%as. 
%For a given variable $v$ it checks that there is an
%execution path from every definition of $v$ to every consequent 
%use of $v$ before another definition of $v$ is executed. 
Data flow metrics include definition-use pair coverage.
The above categories are all based on the structure 
of the program and not on the correctness of its behavior. 

Specification based coverage (SBC)~\cite{ammanSpecCov} is the most 
relevant work in the literature to our approach. 
It measures the adequacy of a test suite against a set of specifications
expressed in computational tree logic (CTL)~\cite{EdOrPe}.
SBC uses a model checker against the traces of the test suite, 
independently of the code.
PBCOV differs in that it uses the code to compute an overapproximation 
of the reachable state space of the properties. 
SBC uses mutation analysis~\cite{mutation} where it mutates 
specification elements such as variables and operators to compute
several inaccurate specifications that the test cases should be able 
to kill. 
The mutants and the test cases are fed to a model checker~\cite{SMV} 
where the test cases are the reference. 
The more mutants the test cases can kill, the better they are. 
Note that the mutant space is infinite and SBC can only generate a 
finite set of mutants.
We differ in using an SMT solver, which allows an infinite state space, 
to compute a sound symbolic overapproximation of the program that might
have an infinite state transition system. 
%Notice that the mutants may underapproximate or overconstrain the 
%specification system
%rather than present a false behavior and in that case we may falsely
%rate a test suite that is under utilizing the program under test. 
PBCOV also differs in that we use the model checker~\cite{abc} only to 
compute the missing states in the finite property state space.
In general, since PBCOV tests the properties against an 
overapproximation of only one mutant,
which is the program itself, it is computing a simpler and safe metric.

Finally, SBC requires the developer to
provide a set of specifications adequate for test suite measurement.
For example, mutation analysis is sensitive to the length of 
the specification and can produce better results with shorter 
specifications~\cite{ammanSpecCov}.
This is why~\cite{ammanSpecCov} introduces techniques to shorten
the CTL specification by dropping redundant predicates.
PBCOV, in contrast, lets the user embed 
assume and guarantee statements in the code as 
simple assert statements; 
no special care is needed, and equivalent properties with
different syntax do not affect the analysis.

%TODO: compare against techniques in the book of testing that
%      discuss logic coverage metrics

\section{Property Based Coverage}
\label{s:pbcov}

Property based coverage takes a software program $S$ with 
a set of properties therein $P$ and a test suite
$T$ and reports the adequacy of $T$ in assessing the behavior of $S$
as formally specified by $P$. 

Each property in $P$ is a Boolean expression composed of atomic
predicate terms and binary and unary Boolean operators such as
conjunction, disjunction and negation. 
The state space of a property $p_i$ in $P$ is all the possible
evaluations of its atomic predicate terms. 
The state space is not fully reachable in most cases due to 
dependencies between the atomic predicate terms as well as due 
to the structure of the program. 
Notice that an unreachable state due to the structure of the program
may well be a bug in the program especially if a property evaluates to 
\CodeIn{true} on that missing state. 

PBCOV runs $T$ against $S$ and computes all the states 
of $P$ that $T$ covers. 
Each state is represented as a conjunction of the evaluation 
of each atomic predicate term.
It also computes a symbolic representation of $P$ as a conjunction
of all the properties in the code. 
Then PBCOV runs an equivalence check comparing the symbolic definition 
of $P$ against the disjunction of all the covered states.
The check returns the missing states.
The missing states may be states where $P$ evaluates to 
\CodeIn{true} and in that case we think that $T$ may not be executing
all the specified behavior of $S$. 
They might also be states where $P$ evaluates to \CodeIn{false}
and in that case we think that $T$ may be missing some bugs in $S$. 
In turn, the missing states might just be unreachable due to the 
dependencies amongst the atomic predicate terms or due to the 
details of the implementation. 
A reachability analysis of the state space may answer both questions.

We use an SMT solver to check the feasibility of the missing states. 
We compute a system of symbolic formulas expressing the atomic predicate
terms and pass them to the SMT solver with a satisfiability check 
on the missing states.
The SMT solver returns a satisfying assignment if the formula
is satisfiable, an unsatisfiable answer if it is not, 
or an unknown answer if the formula is
too hard to decide. 

In PBCOV we overapproximate the reachable states of the properties
by discarding the
unsatisfiable SMT responses and replacing some of the
atomic predicate terms in the SMT formula with free Boolean variables. 
We also attempt to compute the feasibility of a missing state $e$ 
by including a symbolic cut of $S$ which overapproximates $S$ in an 
attempt to generate a possible
test case with legal inputs to $S$.
If that succeeds, we add the test case to $T$ and repeat the 
computations.


\section{Implementation}
\label{s:implementation}

We implemented a tool that computes the PBCOV metric using
an instrumentation that dynamically evaluates the atomic predicate 
terms of a property. 
The implementation works for C programs and makes use of 
available open source tools. 
We used the Crest~\cite{crest} tool to interface with CIL~\cite{cil} 
code manipulation framework
and the Yices~\cite{yices} SMT solver. 
We also used the ABC~\cite{abc} model checker. 
For further details about these tools refer to Section~\ref{s:background}.

{\bf Source code preparation.}
The user includes a header file that provides the needed 
macros, breakpoint functions, and pragmas, 
and specifies the properties using the provided 
\CodeIn{PBCOV\_ASSERT} macro. 
The macro flags the property to be processed later on.

{\bf CIL instrumentation.}
We instrument the source code that now contains the properties 
using CIL. 
CIL composes all conditions in the code into nested if-statement
structures while preserving the logic. 
The conditions in these if-statements are exactly the 
predicate terms described earlier. 
We capture and process the block of code consisting of the 
if-statements corresponding to 
the property provided by the user. 
We fragment the nested if-statements in this block so that each holds 
only an atomic predicate term, as explained in Section~\ref{s:pbcov}.
We instrument these to collect their values and their symbolic versions
at run time. 

{\bf Reporting missing states.}
We collect the different term permutations and the value of the 
conjunction of the properties in a file. 
This information is used to build a dynamic description of 
the property as a simple truth table. 
We construct a Boolean circuit with one Boolean variable per atomic 
predicate term and construct a symbolic Boolean formula
specifying the property. 

We construct a sequential circuit claiming the equivalence of 
the symbolic Boolean description of the property and the 
dynamically collected truth table, and we pass the circuit
to the ABC model checker. 

The ABC model checker reports the missing states either in symbolic 
form or as an enumeration of states. 
These missing states are counterexamples to the equivalence claim.
We pass the counter examples with the original SMT form of the 
property to Yices.
We read the output of Yices and discard the unsatisfiable missing
states from the reachable state space. 

\section{Results}
\label{s:results}

We evaluated PBCOV using the TCAS benchmark from SIR~\cite{doESE05}. 
TCAS is embedded airborne software that helps prevent 
midair collisions. Thus, it is a safety-critical program that needs to 
be properly tested and verified. In theory any implementation of TCAS 
must comply with the TCAS II manual, and many implementations do exist 
in practice. 
The TCAS SIR benchmark is a C-code implementation that encompasses 173 
lines of code and 9 functions. 
Siemens researchers provide 41 variants of TCAS, where each variant 
has a seeded bug. 
The 41 versions cover bugs of different types, such as mutations
of constants, operands, and operators, logic changes, as well as
missing code. 
PBCOV does well on all these types of bugs with no noticeable
variation.
The TCAS benchmark also provides a test suite of 1608 cases. 
The test suite detects the bugs in all of the 41 variants. 

%\begin{table}[!]
%\renewcommand{\arraystretch}{1.3}
%\caption{Bug types according to each version of TCAS}
%\label{bug}
%\centering
%\begin{tabular}{|l|l|} \hline
%bug type & versions affected\\ \hline
%operator mutation & 1, 6, 9, 10, 20, 25, 39\\ \hline
%operand mutation & 2, 21-24, 35, 37, 38\\ \hline
%constant mutation & 7, 8, 13, 14, 16-19, 33, 36\\ \hline
%logic change & 3, 4, 11, 12, 28, 34\\ \hline
%missing code & 5, 15, 26, 27, 29-32, 40, 41\\ \hline
%\end{tabular}
%\end{table}

\subsection{TCAS properties}

TCAS is a two-dimensional alert system that alerts the pilot in 
case of an imminent collision. 
TCAS calculates the time it takes two airplanes to come into close proximity. 
It also identifies two zones in the vicinity of an airplane. 
The first zone is the larger zone in which TCAS issues a Traffic 
Advisory (TA) alert to announce an incoming airplane. 
In case the threat of collision persists and the airplane is about
to enter the inner and smaller danger zone, TCAS issues a 
Resolution Advisory (RA), with instructions to maneuver either Upward 
by climbing, or Downward by descending.

The function that evaluates the collision status is called 
\verb|alt_sep_test|; it takes 14 input parameters and produces one 
output parameter. 
The output parameter instructs the plane to climb (\verb|UPWARD_RA|), 
descend (\verb|DOWNWARD_RA|), or do nothing (\verb|UNRESOLVED|). 
The relevant input parameters are: \verb|Own_Tracked_Alt|, 
\verb|Other_Tracked_Alt|, \verb|Positive_RA_Alt_Thresh|, 
\verb|Up_Separation|, and \verb|Down_Separation|.

The recent work in~\cite{503230} defines five separate formal properties 
for TCAS. 
These properties specify the relation of expected output to input values.
Each property is the conjunction of two sub-properties, 
where each sub-property has the form of a precondition implying a 
result value. We illustrate property 1 below and refer the reader
to~\cite{503230} for further details. 
\begin{itemize}
\item Property 1 with six terms and 16 possible permutations:
\begin{itemize} \small
\item[] $Up\_Separation \geq Positive\_RA\_Alt\_Thresh \wedge Down\_Separation < Positive\_RA\_Alt\_Thresh \mapsto result \ne DOWNWARD\_RA$
\item[]	$Up\_Separation < Positive\_RA\_Alt\_Thresh \wedge Down\_Separation \geq Positive\_RA\_Alt\_Thresh \mapsto result \ne UPWARD\_RA$
\end{itemize}\normalsize
%\item Property  2 - 8 terms, 64 permutations:\begin{itemize} \small
%\item	$Up\_Separation < Positive\_RA\_Alt\_Thresh \wedge Down\_Separation < Positive\_RA\_Alt\_Thresh \wedge Down\_Separation < UP\_Separation \mapsto \\ result \ne DOWNWARD\_RA$.
%\item	$Up\_Separation < Positive\_RA\_Alt\_Thresh \wedge Down\_Separation < Positive\_RA\_Alt\_Thresh \wedge Down\_Separation > UP\_Separation \mapsto \\ result \ne UPWARD\_RA$.
%\end{itemize}\normalsize
%\item Property  3 - 8 terms, 64 permutations:\begin{itemize} \small
%\item	$Up\_Separation \geq Positive\_RA\_Alt\_Thresh \wedge Down\_Separation \geq Positive\_RA\_Alt\_Thresh \wedge Own\_Tracked\_Alt > Other\_Tracked\_Alt \mapsto result \ne DOWNWARD\_RA$.
%\item	$Up\_Separation \geq Positive\_RA\_Alt\_Thresh \wedge Down\_Separation \geq Positive\_RA\_Alt\_Thresh \wedge Own\_Tracked\_Alt < Other\_Tracked\_Alt \mapsto result \ne UPWARD\_RA$.
%\end{itemize}\normalsize
%\item Property  4 - 4 terms, 16 permutations:\begin{itemize} \small
%\item	$ Own\_Tracked\_Alt > Other\_Tracked\_Alt \mapsto \\ result \ne DOWNWARD\_RA$.
%\item	$ Own\_Tracked\_Alt < Other\_Tracked\_Alt \mapsto \\ result \ne UPWARD\_RA$.
%\end{itemize}\normalsize
%\item Property  5 - 4 terms, 16 permutations:\begin{itemize} \small
%\item	$Down\_Separation < UP\_Separation \mapsto \\ result \ne DOWNWARD\_RA$.
%\item	$Down\_Separation > UP\_Separation \mapsto \\ result \ne UPWARD\_RA$.
%\end{itemize}\normalsize
\end{itemize}

Note that some properties contain both a term and its negation; 
such a pair is regarded as a single term when the number of 
possible permutations is computed.
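To make this counting convention concrete, the following Python sketch (our own illustration, not the authors' tooling) enumerates truth assignments to the six syntactic terms of property 1, discards the assignments where a term and its negation agree, and evaluates the property on each remaining permutation:

```python
from itertools import product

# Six syntactic terms of property 1; pairs (0, 3) and (1, 4) are a
# term and its exact negation, so each pair contributes one boolean.
TERMS = [
    "Up_Separation >= Positive_RA_Alt_Thresh",    # t0
    "Down_Separation < Positive_RA_Alt_Thresh",   # t1
    "result != DOWNWARD_RA",                      # t2
    "Up_Separation < Positive_RA_Alt_Thresh",     # t3 = not t0
    "Down_Separation >= Positive_RA_Alt_Thresh",  # t4 = not t1
    "result != UPWARD_RA",                        # t5
]
NEGATION_PAIRS = [(0, 3), (1, 4)]

def feasible(assignment):
    """A term and its negation cannot take the same truth value."""
    return all(assignment[i] != assignment[j] for i, j in NEGATION_PAIRS)

permutations = [a for a in product([False, True], repeat=len(TERMS))
                if feasible(a)]
print(len(permutations))  # 2^6 assignments collapse to 2^4 = 16

def property1(a):
    """Conjunction of the two sub-properties (p -> q == not p or q)."""
    sub1 = not (a[0] and a[1]) or a[2]
    sub2 = not (a[3] and a[4]) or a[5]
    return sub1 and sub2

print(sum(property1(a) for a in permutations))  # 12 of the 16 satisfy it
```

In the tables that follow, the \#1 and \#2 columns report how many of these permutations a test suite actually covers dynamically, split by whether the property evaluates to true or false.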

\subsection{Experimental Setup}

There are several variants of the TCAS program: 
one correct version and 41 buggy versions. 
Its test suite, which exposes the seeded bugs, exhibits high 
coverage under traditional metrics as measured by ATAC~\cite{atac} and GCOV~\cite{gcov}.

We refer to the original test suite, comprising all 1608 test cases, 
as the {\em full} test suite, and to the reduced test suite that 
excludes the bug-exposing test cases as {\em full-bug}. 

We compute and compare the coverage metrics of these two test suites using traditional methods. 
As we show later, even though the {\em full} test suite reveals the bugs while {\em full-bug} does not,
the computed coverage values are identical in most cases and very close in the others.
We also evaluate both test suites with PBCOV.

This methodology shows that, while traditional coverage metrics are not 
sensitive to the program behavior specified in the properties, 
PBCOV does report a deficiency in the test suite. 
It also measures whether the specified properties are useful for 
uncovering the bugs seeded in the program.

We repeat the same process by removing, for each version, the test cases 
that violate the properties from the {\em full-bug} test suite; 
we call the resulting test suite {\em full-bug-assert}.
That is, we remove all the test cases under which the property
evaluates to \CodeIn{false}.
While the {\em full-bug-assert} test suite has less state space 
coverage by design, we are interested in evaluating it using
the traditional coverage techniques. 
In most cases, as we show later, {\em full-bug-assert} has the same 
coverage as the {\em full} and {\em full-bug} test suites under ATAC and GCOV. 
We take this as evidence of the utility of PBCOV in detecting deficiencies in test suites. 
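The construction of the three suites can be sketched as follows. This is a minimal illustration with hypothetical \CodeIn{exposes\_bug} and \CodeIn{satisfies\_property} flags per test case; in the real setup these are determined by comparing a buggy version's output against the correct version and by evaluating the embedded property on each execution:

```python
# Hypothetical test-case records: (id, exposes_bug, satisfies_property).
test_pool = [
    ("t1", False, True),
    ("t2", True,  True),
    ("t3", False, False),
    ("t4", True,  False),
]

# full: the original suite, unchanged.
full = list(test_pool)
# full-bug: drop every test case that exposes the seeded bug.
full_bug = [t for t in full if not t[1]]
# full-bug-assert: additionally drop cases where the property is false.
full_bug_assert = [t for t in full_bug if t[2]]

print([t[0] for t in full_bug])         # ['t1', 't3']
print([t[0] for t in full_bug_assert])  # ['t1']
```

Each filtering step can only shrink the suite, which is why {\em full-bug-assert} has less state space coverage by construction.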

\subsection{Scenarios}

We refer to the conjunction of all five properties 
as the sixth property.
For each version and under each of the six properties, we constructed 
the three test suites {\em full}, {\em full-bug}, and {\em full-bug-assert}. 
The first two test suites are version dependent; we run them under each 
version and measure their coverage using ATAC and GCOV. 
Since the {\em full-bug-assert} test suite is property specific, 
we run it for each version and property and measure its coverage 
under ATAC and GCOV. 
Finally, we also measure the property based coverage.

The results are summarized in Tables~\ref{prop1_res} to~\ref{prop6_res}.
Each table corresponds to one property, and compares the different 
coverage metrics. 
Note that in each table we show interesting cases, detailing the 
coverage under ATAC, GCOV, and PBCOV. 
The column headings in Tables~\ref{prop1_res} to~\ref{prop6_res} are explained in Table~\ref{short}. 
In each row, we show the coverage information for each version under 
two or three test suites. 
Where {\em full-bug-assert} is missing, it has exactly the same 
coverage as {\em full-bug}.
Versions with identical coverage information appear on the same row.


\begin{table}[!t]
\renewcommand{\arraystretch}{1.3}
\caption{Coverage metrics and shortened notation}
\label{short}
\centering
\begin{tabular}{|l|l|}
\hline
metric & short-hand\\
\hline
\% blk & \% blocks \\ \% dec& \% decisions \\
\% C-U& \% C-Use pairs \\ \%P-U & \% P-Use Pairs\\
\% line& \% lines executed \\ \% brch& \% branches executed\\
\% TALO& \% branches taken at least once \\ \% Call& \% calls executed\\
\#1& \# of permutations where property is true \\ \#2& \# of permutations where property is false\\
\hline
\end{tabular}
\end{table}

\input{pbcov_tables}

\subsection{Discussion}

In this section we highlight several important observations 
that we support with detailed examples from Tables~\ref{prop1_res} to~\ref{prop6_res}
in the following subsections.

\textbf{\textit{First}}, PBCOV is 
better than ATAC and GCOV in most cases
and at least as good in the others. 
When we compare a test suite that detects the bug against one that 
does not,
PBCOV reports less coverage for the latter,
while ATAC and GCOV show the same 
coverage in most cases, and marginally less coverage for the
latter in a few cases. 

\textbf{\textit{Second}}, while ATAC and GCOV report 
coverage above 90\% for most of the test suites, whether reduced or not, 
PBCOV reports less than 50\% in some cases. 
This exposes deficiencies in test suites that should exercise the
behavior of the program rather than its structure.

\textbf{\textit{Third}}, some properties are insufficient 
to describe the program, as they fail to expose the bug in several 
cases. 
We notice this in Table~\ref{prop6_res}, where all the properties are 
combined and the term permutations  
that evaluate the property to \CodeIn{false} do not change between the 
{\em full} and the {\em full-bug} test suites. This can be seen in the 
row for versions $2,3,7,8,\ldots,38,39$. 

The \textbf{\textit{fourth}} point concerns restoring the 
permutations induced by the removed bug-exposing test cases. 
Using the overapproximating reachability analysis, we were able 
in all cases to restore removed test cases that induce the missing terms. 
In the following we comment on the above observations in detail for
each of the properties.

\subsubsection{Property 1}

Table~\ref{prop1_res} confirms the observations noted at the beginning of this section for property 1. 
For instance, versions 1, 31, 32, and 37 show the same coverage 
information under ATAC and GCOV for both the {\em full} and {\em full-bug} test suites.
Note that the {\em full-bug} test suite does not detect the bug, and 
thus its high coverage might mislead testers. 
PBCOV reports a reduction in coverage for the test suite that hides 
the bug, where only 10 out of 16 possible evaluations are covered. 

Other versions, such as 4 and 36, show that the reduced test suites 
have lower coverage than the {\em full} test suite under PBCOV 
as well as under traditional coverage. 
This confirms that whenever GCOV and ATAC show lower coverage, 
so does PBCOV. 

The last row is interesting in that PBCOV has no effect in exposing 
the bugs in these versions. This is simply because the bugs
seeded in these versions do not violate property 1.
Note that other properties expose the bugs in these versions, 
as discussed later.

\subsubsection{Property 2}

Similar arguments to those of property 1 can be made about 
Table~\ref{prop2_res}.
Versions 1, 15, 31 (not shown in the table), 32, 37, and 41 
show results similar to those of property 1. 
The interesting cases, however, are the versions outlined in row 2 of Table~\ref{prop1_res}. 
The {\em full-bug-assert} test suite shows lower PBCOV by construction, 
but still has the same high coverage under ATAC and GCOV. 
As a result, adding more test cases to {\em full-bug-assert} 
will not affect the traditional coverage metrics, but will 
increase state space coverage, which helps expose the bug. 

Finally, we note that version 22 shows no 
difference in coverage under any technique. 
This is because property 2 does not specify the 
full behavior of the code. 
Furthermore, versions 4 and 36 show decreased coverage in all 
techniques when comparing the {\em full} and {\em full-bug-assert} 
test suites.

\subsubsection{Properties 3 and 4}

The same arguments made for properties 1 and 2 about points one to
four hold for the results of properties 3 and 4 described in 
Tables~\ref{prop3_res} and~\ref{prop4_res}.
However, we note that the {\em full-bug} test suite shows less 
state space coverage for 
versions 6 and 11, and, under property 4 only, for version 10. 
This was not the case with properties 1 and 2. 
Moreover, version 22 has lower PBCOV under the
{\em full-bug-assert} test suite, which also was not the case with 
properties 1 and 2. 
Other versions show similar results as in Tables~\ref{prop1_res} and~\ref{prop2_res}.

\subsubsection{Property 5}

Table~\ref{prop5_res} lists the results for property 5. 
We note that the {\em full-bug} test suite shows less PBCOV 
for both versions 28 and 35. 
This was not the case with the previous properties, with the 
exception of property 2.%  (not shown in Table~\ref {prop2_res}). 
Version 40 has decreased PBCOV under 
{\em full-bug-assert}, which was also not the case in properties 1 to 4.
Other versions show similar results as in 
Tables~\ref{prop1_res} to~\ref{prop4_res}, especially version 9.

\subsubsection{All properties combined}

The last table in this results section is Table~\ref{prop6_res}, 
which details the results for the five properties combined. 
The combined property is simply the conjunction of the five properties. 
The results listed are not surprising. 
For each version where PBCOV was lower under the {\em full-bug} 
test suite, 
the situation did not change, as this is a version-specific result that 
does not depend on any property (note row 1 in the table, for example).
Other versions, however, show the accumulated effect of the five 
properties: removing the test cases that evaluate the combined property 
to \CodeIn{false} removes hundreds of test cases, which reduces 
the coverage under GCOV and ATAC to as low as 76\% in some cases. 
The combined property has 10 terms, which results in a 
total of 256 permutations after removing the infeasible ones. 
In the dynamic evaluation, the total number 
of term permutations never exceeded 30. 
This results in a low state space coverage, even though the coverage 
under ATAC and GCOV never went below 70\%.
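The total of 256 feasible permutations follows from the counting convention introduced with property 1: the combined property has 10 syntactic terms, two pairs of which are a term and its exact negation. The Python sketch below (our own reconstruction of the term list from the five properties, not the authors' tooling) reproduces the count:

```python
from itertools import product

# Ten syntactic terms of the combined property (properties 1 to 5).
TERMS = [
    "Up_Separation >= Positive_RA_Alt_Thresh",    # 0
    "Up_Separation < Positive_RA_Alt_Thresh",     # 1 = not 0
    "Down_Separation < Positive_RA_Alt_Thresh",   # 2
    "Down_Separation >= Positive_RA_Alt_Thresh",  # 3 = not 2
    "Down_Separation < Up_Separation",            # 4
    "Down_Separation > Up_Separation",            # 5 (not the negation of 4)
    "Own_Tracked_Alt > Other_Tracked_Alt",        # 6
    "Own_Tracked_Alt < Other_Tracked_Alt",        # 7 (not the negation of 6)
    "result != DOWNWARD_RA",                      # 8
    "result != UPWARD_RA",                        # 9
]
# Only exact negation pairs are merged; strict-inequality pairs such as
# (4, 5) remain independent terms since equality makes both false.
NEGATION_PAIRS = [(0, 1), (2, 3)]

feasible = [a for a in product([False, True], repeat=len(TERMS))
            if all(a[i] != a[j] for i, j in NEGATION_PAIRS)]
print(len(feasible))  # 2^10 assignments collapse to 2^8 = 256
```

Against this state space of 256 permutations, the at most 30 permutations observed dynamically explain the low PBCOV figures for the combined property.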
 
In summary, the experimental results confirm all the observations 
noted at the beginning of this section.
We also noticed that all the versions 
(some omitted due to space considerations) showed lower coverage 
under PBCOV than under all the other techniques. 
This suggests that PBCOV is good at exposing 
deficiencies in test suites, but more work is needed for it to
express their completeness. 
The detailed results and an implementation of this technique
will be available 
\href{http://webfea.fea.aub.edu.lb/fadi/dkwk/doku.php?id=pbcov}{online}.


\section{Conclusion}
\label{s:conclusion}

In this paper we presented PBCOV, a novel property
based coverage technique, and we reported experimental 
results that compare it against traditional 
coverage techniques. 
PBCOV measures the covered state space of the properties within
a software program against an overapproximation of the
reachable state space, computed using an SMT solver and an 
instrumentation of the code. 
PBCOV proved effective in detecting deficiencies 
in test suites. 
In the future, we plan to extend this work to measure the
completeness of the properties themselves, as we 
noticed that some of the bugs seeded in the program
escaped even the combination of the properties. 
We also plan to improve PBCOV so that it can express the
completeness of a test suite, and not only
characterize its deficiency.

{\small 
\bibliographystyle{abbrv}
\bibliography{my}  
}

\end{document}
