\section{Approach}
\label{sec:approach}
As manual selection of test cases for regression testing is tedious and error-prone, we have developed three techniques to automate
test-case selection for security policy evolution.
Consider program code that interacts with a PDP loaded with a policy $P$.
Let $P'$ denote the modified version of $P$, and
let $S_P$ denote the program code interacting with $P$.
For regression-test selection,
our goal is to select $T' \subseteq T$, where $T$ is an existing test suite,
such that $T'$ reveals the system-behavior differences caused by the modification from $P$ to $P'$.

\begin{figure}[t]
    \centering
        \includegraphics[width=3.5in]{example_mutant.eps}
        \vspace{-12pt}
    \caption{\label{fig:rdcexample}An example mutant policy created by changing the first rule's decision (i.e., effect).}
    \vspace{-10pt}
%    \vspace{+3pt}
\end{figure}


\subsection{Test Selection Based on Mutation Analysis}
% (i.e., rule-test correlation)
Our first technique establishes correlations
between rules and test cases based on mutation analysis before performing regression-test selection.

%This step establishes correlation as follows.

\textbf{Correlation between rules and test cases.}
For each rule $r_i$ in $P$, we create $P$'s rule-decision-change (RDC) mutant $M(r_i)$ by changing $r_i$'s decision (e.g., Permit to Deny).
Figure~\ref{fig:rdcexample} illustrates an example mutant by changing the decision of the first rule in Figure~\ref{fig:example}.
The technique next executes $T$ on $S_P$ and $S_{M(r_i)}$, respectively,
and monitors the evaluated decisions. If the two decisions differ for a test case $t \in T$, the technique establishes a correlation
between $r_i$ and $t$.
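The correlation step can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy first-applicable policy modeled as an ordered list of \CodeIn{(predicate, decision)} rules, and tests that each issue a single request (in the actual approach, requests flow from PEPs in $S_P$ to the PDP).

```python
def evaluate(policy, request):
    """Return the decision of the first applicable rule, or NotApplicable."""
    for predicate, decision in policy:
        if predicate(request):
            return decision
    return "NotApplicable"

def rdc_mutant(policy, i):
    """Rule-decision-change (RDC) mutant M(r_i): flip rule i's decision."""
    return [(p, ("Deny" if d == "Permit" else "Permit") if j == i else d)
            for j, (p, d) in enumerate(policy)]

def correlate(policy, tests):
    """Map each rule index i to the ids of tests whose evaluated
    decision differs between the original policy and M(r_i)."""
    correlation = {i: set() for i in range(len(policy))}
    for i in range(len(policy)):
        mutant = rdc_mutant(policy, i)
        for test_id, request in tests:
            if evaluate(policy, request) != evaluate(mutant, request):
                correlation[i].add(test_id)
    return correlation
```

Note that the loop over rules is what makes this step costly: the test suite is exercised once per mutant in addition to the original policy.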
%\textbf{Syntactic Policy Changes.} 




\textbf{Regression-test selection.}
This step selects the test cases correlated with rules
involved in syntactic changes between $P$ and $P'$.
In particular, the technique analyzes the syntactic difference, \CodeIn{SDiff}, between $P$ and $P'$
(e.g., a rule's decision or location is changed) and identifies the rules involved in that difference.
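The selection step can be sketched as below, under two simplifying assumptions that are ours, not the paper's: rules are comparable values diffed by position (a stand-in for \CodeIn{SDiff}), and the rule-to-test correlation map comes from the correlation step described above.

```python
def sdiff(old_rules, new_rules):
    """Indices of rules that differ syntactically (changed, added, removed).
    A positional comparison stands in for the real SDiff analysis."""
    length = max(len(old_rules), len(new_rules))
    return {i for i in range(length)
            if i >= len(old_rules) or i >= len(new_rules)
            or old_rules[i] != new_rules[i]}

def select_tests(correlation, changed_rules):
    """Select every test case correlated with at least one changed rule."""
    selected = set()
    for i in changed_rules:
        selected |= correlation.get(i, set())
    return selected
```

For example, if only rule 0's decision changes between $P$ and $P'$, only the test cases correlated with rule 0 are selected.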

%This step first analyzes syntactic difference between $P$ and $P'$
%(i.e., which rules are changed due to the policy changes).
%Once these rules (which are reflected by syntactic difference) are identified, 
The drawback of this technique is that it requires the correlation step, which can be costly in terms of execution time:
the technique executes $T$ $2 \times n$ times,
where $n$ is the number of rules in $P$.
Moreover, whenever the policy is modified, the correlation step must be repeated for the changed rules.
As this regression-test selection is based on \CodeIn{SDiff}, the technique may select test cases correlated with rules
that are not involved in actual policy behavior changes (i.e., semantic policy changes).



\subsection{Test Selection Based on Coverage Analysis}

To reduce the cost of the correlation step in the preceding technique, our
second technique correlates test cases only with the rules that they evaluate (i.e.,
cover).


\textbf{Correlation between rules and test cases.}
Our technique executes the test cases in $T$ on $S_P$ and monitors
which rules are evaluated for the
requests issued during the execution of each test case $t \in T$.
The technique establishes a correlation between a rule $r_i$ and $t \in T$
if and only if $r_i$ is evaluated for a request issued from a PEP while executing $t$.
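A minimal sketch of this coverage-based correlation, again assuming our toy first-applicable model in which a rule counts as evaluated if the PDP reaches its predicate before finding the first applicable rule:

```python
def covered_rules(policy, request):
    """Indices of rules evaluated for a request under first-applicable."""
    covered = set()
    for i, (predicate, _) in enumerate(policy):
        covered.add(i)          # rule i's predicate is evaluated
        if predicate(request):
            break               # the first applicable rule ends evaluation
    return covered

def correlate_by_coverage(policy, tests):
    """Map each rule index to the ids of tests that cover it.
    T is executed only once, against the original policy."""
    correlation = {i: set() for i in range(len(policy))}
    for test_id, request in tests:   # requests issued from PEPs during t
        for i in covered_rules(policy, request):
            correlation[i].add(test_id)
    return correlation
```

Compared with mutation-based correlation, this map can be a superset: a test may cover a rule's predicate without its decision depending on that rule, which is the price of the single test-suite execution.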

\textbf{Regression-test selection.} We use the same selection step as in the preceding technique.
An important benefit of this technique is its reduced test-execution cost:
it requires executing $T$ only once.
Similar to the preceding technique, however, this technique finds the modified rules based on \CodeIn{SDiff}
between $P$ and $P'$, and these rules may not be involved in actual policy behavior changes.


\subsection{Test Selection Based on Recorded Request Evaluation}
To reduce the correlation cost of the preceding techniques, we develop
a technique that does not require any correlation between test cases and rules.
The third technique executes $T$ on $S_P$,
capturing and recording the requests $R$ issued from PEPs
during the execution.
For test selection, our technique evaluates $R$ against both $P$ and $P'$, and selects each test case $t \in T$
that issues a request yielding different decisions for $P$ and $P'$.
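This replay-based selection can be sketched as follows; the per-test request log and the first-applicable evaluation model are illustrative assumptions, standing in for requests captured at PEPs and for PDP evaluation.

```python
def evaluate(policy, request):
    """First-applicable evaluation over (predicate, decision) rules."""
    for predicate, decision in policy:
        if predicate(request):
            return decision
    return "NotApplicable"

def select_by_replay(recorded, old_policy, new_policy):
    """recorded maps a test id to the requests captured from PEPs while
    executing that test; select each test with a request that the
    original and modified policies decide differently."""
    return {test_id for test_id, requests in recorded.items()
            if any(evaluate(old_policy, r) != evaluate(new_policy, r)
                   for r in requests)}
```

Because selection compares evaluated decisions rather than syntactic rule changes, only tests affected by semantic policy changes are selected.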

This technique requires the execution of $T$ only once.
Moreover, it is especially useful when the policies themselves are not available
and only their evaluated decisions are.
As differing decisions reflect actual policy behavior changes (i.e., semantic changes) between $P$ and $P'$,
this technique can select fault-revealing test cases more effectively.

%reveal different evaluation results.


 
\subsection{Safe Test-Selection Techniques}
\label{subsec:safety}

A test-selection algorithm is \CodeIn{safe} if the algorithm
selects every fault-revealing test case, i.e., every test case that would reveal a fault in the modified version. In our work, the first test-selection technique is \CodeIn{safe} when a policy uses the \Intro{first-applicable} combining algorithm.
If the policy uses another combining algorithm, we use our previous approach~\cite{liu08:xengine} to convert
the policy to a corresponding policy that uses the \Intro{first-applicable} algorithm.
The second and third techniques are \CodeIn{safe} for any policy specified in XACML.
Due to space limits, the safety proofs of our three techniques are presented on our project website\footnote{\url{
http://research.csc.ncsu.edu/ase/projects/regpolicy/}}.
 








 



















