\section{Introduction} \label{sec:introduction}
 
Access control is a privacy and security mechanism that grants only legitimate users access to critical information.
Access control is governed by security policies such as access control policies (policies in short), each of which consists of a sequence of rules specifying
which subjects are permitted or denied access to which resources under which conditions.
To facilitate specifying policies,
system developers often use policy specification languages such as XACML~\cite{oasis05:xacml}.
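To make the rule model concrete, the following is a minimal sketch (not real XACML) of a policy as an ordered sequence of rules, assuming a first-applicable combining algorithm; all names and the example policy are illustrative:

```python
# Minimal sketch (not real XACML): a policy is a sequence of rules, each
# matching a subject/resource pair under a condition and yielding a decision.
# We assume first-applicable combining: the first matching rule decides.

PERMIT, DENY, NOT_APPLICABLE = "Permit", "Deny", "NotApplicable"

def make_rule(subject, resource, condition, decision):
    """A rule applies when subject and resource match and the condition holds."""
    return {"subject": subject, "resource": resource,
            "condition": condition, "decision": decision}

def evaluate(policy, request):
    """Return the decision of the first applicable rule, else NotApplicable."""
    for rule in policy:
        if (rule["subject"] == request["subject"]
                and rule["resource"] == request["resource"]
                and rule["condition"](request)):
            return rule["decision"]
    return NOT_APPLICABLE

# Illustrative policy: doctors may read records only during working hours.
policy = [
    make_rule("doctor", "record", lambda r: 9 <= r["hour"] < 17, PERMIT),
    make_rule("doctor", "record", lambda r: True, DENY),
]

print(evaluate(policy, {"subject": "doctor", "resource": "record", "hour": 10}))  # Permit
print(evaluate(policy, {"subject": "doctor", "resource": "record", "hour": 22}))  # Deny
```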

As security requirements change, developers may modify policies to comply with the new requirements.
After such a modification, it is important to validate and verify the system to determine that
the modification is correct and does not introduce unexpected behaviors (i.e., regression faults).
Consider that the system's original policy $P$ is replaced with a modified policy $P'$.
The system may exhibit different system behaviors
caused by different policy behaviors (i.e., given a request, its evaluated decisions against $P$ and $P'$, respectively, are different).
Such different system behaviors are ``dangerous'' portions where regression faults could be exposed.


In order to validate the dangerous portions with existing test cases, a naive strategy of regression testing is to rerun all existing system test cases.
However, rerunning all of these test cases could be costly and time-consuming, especially for large-scale systems. Instead, developers can apply regression-test selection
before test execution, selecting and executing only those test cases that may expose different behaviors across different versions of the system.
If the cost of regression-test selection and selected-test execution is smaller than that of rerunning all of the initial system test cases, regression-test selection helps reduce the overall cost of validating whether the modification is correct.

\review{In addition to cost effectiveness, safety is an important aspect of regression-test selection. A safe regression-test-selection approach
selects every test case that may reveal a fault in a modified program~\cite{Rothermel:1996:ART:235681.235682}.
In contrast, an unsafe regression-test-selection approach may omit test cases
that reveal a fault in the modified program.
}


In this paper, we propose a safe regression-test-selection approach that selects a superset of the fault-revealing test cases, i.e., the test cases that reveal faults due to the policy modification.
To the best of our knowledge, our work is the first on test selection in the context of policy evolution.
Different from prior research on test selection~\cite{Rothermel:1996:ART:235681.235682,Graves:2001:ESR:367008.367020},
which deals with changes in program code, our work deals with changes in code-related components such as policies, which
impact system behaviors.




Our approach includes three regression-test-selection techniques:
the first based on mutation analysis, the second
based on coverage analysis, and the third based on recorded
request evaluation.
The first two techniques are based on correlations between test cases and the rules $R_{imp}$,
where $R_{imp}$ are the rules involved in syntactic changes across policy versions.
The first technique selects a rule $r_i$ in $P$ and creates $P$'s mutant $M(r_i)$
by changing $r_i$'s decision.
This technique selects test cases that reveal different policy behaviors by
executing the test cases on program code interacting with $P$ and $M(r_i)$, respectively.
Our rationale is that, if a test case is correlated
with $r_i$, the test case may reveal different system behaviors affected by a modification of $r_i$ in $P$.
However, this technique is costly because it
requires at least 2$\times$$n$ executions of each test case to find all correlations between
test cases and rules, where $n$ is the number of rules in $P$.
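The mutation-based correlation step can be sketched as follows, in a deliberately simplified setting: each test case is reduced to a single request, rules are (subject, resource, decision) triples with first-applicable combining, and all names are illustrative rather than taken from our implementation:

```python
# Hedged sketch of the mutation-based technique: flip each rule's decision to
# build a mutant M(r_i); a test case is correlated with r_i if its observable
# behavior differs between the original policy P and M(r_i).

def evaluate(policy, request):
    # First-applicable combining (a simplifying assumption).
    for subject, resource, decision in policy:
        if (subject, resource) == request:
            return decision
    return "NotApplicable"

def flip(decision):
    return "Deny" if decision == "Permit" else "Permit"

def mutant(policy, i):
    rules = list(policy)
    s, r, d = rules[i]
    rules[i] = (s, r, flip(d))
    return rules

def correlate(policy, tests):
    """Map each rule index to the ids of the tests whose behavior it affects."""
    corr = {i: set() for i in range(len(policy))}
    for i in range(len(policy)):
        m = mutant(policy, i)
        for tid, request in tests.items():
            # Two executions per test per rule: 2*n executions overall.
            if evaluate(policy, request) != evaluate(m, request):
                corr[i].add(tid)
    return corr

P = [("doctor", "record", "Permit"), ("nurse", "record", "Deny")]
tests = {"t1": ("doctor", "record"), "t2": ("nurse", "record"),
         "t3": ("guest", "record")}

corr = correlate(P, tests)
changed_rules = {1}                      # suppose rule 1 is in R_imp
selected = set().union(*(corr[i] for i in changed_rules))
print(sorted(selected))                  # ['t2']
```

Only `t2` is selected because only its request is decided by the changed rule; `t3` never reaches an applicable rule and is safely skipped.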


The second technique uses coverage analysis to establish correlations between test cases and rules
by monitoring which rules are evaluated (i.e., covered) for
requests issued from program code.
Compared with the first technique,
this technique substantially reduces the cost of the correlation process
because it requires executing each test case only once.
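A minimal sketch of this coverage-based variant, under the same simplifying assumptions as before (rules as triples, first-applicable combining, illustrative names), records the matched rule during a single evaluation pass:

```python
# Hedged sketch of the coverage-based technique: run each test case once
# against P, record which rules its requests evaluate (cover), and select
# the tests that cover any rule in R_imp.

def evaluate_with_coverage(policy, request, covered):
    # First-applicable combining (a simplifying assumption);
    # the index of the matching rule is recorded as covered.
    for i, (subject, resource, decision) in enumerate(policy):
        if (subject, resource) == request:
            covered.add(i)
            return decision
    return "NotApplicable"

def correlate_by_coverage(policy, tests):
    """One execution per test: map test id -> set of covered rule indices."""
    corr = {}
    for tid, requests in tests.items():
        covered = set()
        for request in requests:
            evaluate_with_coverage(policy, request, covered)
        corr[tid] = covered
    return corr

P = [("doctor", "record", "Permit"), ("nurse", "record", "Deny")]
tests = {"t1": [("doctor", "record")],
         "t2": [("nurse", "record"), ("doctor", "record")],
         "t3": [("guest", "record")]}

corr = correlate_by_coverage(P, tests)
R_imp = {1}                                   # rules changed across versions
selected = {tid for tid, covered in corr.items() if covered & R_imp}
print(sorted(selected))                       # ['t2']
```

Each test runs once regardless of the number of rules, which is where the cost saving over the mutation-based variant comes from.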



The third technique first captures requests issued from program code while executing the test cases, and
evaluates these requests against $P$ and $P'$, respectively.
It then selects only the test cases that
issue requests evaluated to different decisions.
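This third technique can be sketched as follows, again under simplified assumptions (rules as triples, first-applicable combining, illustrative names); the recorded requests stand in for those captured during actual test execution:

```python
# Hedged sketch of the recorded-request technique: requests captured during
# test execution are re-evaluated against P and P'; a test is selected iff
# any of its requests receives different decisions under the two policies.

def evaluate(policy, request):
    # First-applicable combining (a simplifying assumption).
    for subject, resource, decision in policy:
        if (subject, resource) == request:
            return decision
    return "NotApplicable"

def select(P, P_prime, recorded):
    """recorded: test id -> requests captured while running that test."""
    return {tid for tid, requests in recorded.items()
            if any(evaluate(P, r) != evaluate(P_prime, r) for r in requests)}

P = [("doctor", "record", "Permit"), ("nurse", "record", "Deny")]
P_prime = [("doctor", "record", "Permit"), ("nurse", "record", "Permit")]

recorded = {"t1": [("doctor", "record")],
            "t2": [("nurse", "record")],
            "t3": [("guest", "record")]}

print(sorted(select(P, P_prime, recorded)))   # ['t2']
```

No mutants and no coverage instrumentation are needed: only the captured requests are re-evaluated against the two policy versions, which is why this variant tends to be the cheapest of the three.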

\Comment{
We evaluate our approach on three Java programs interacting with policies. Our evaluation results show that our test-selection techniques achieve
51\%$\sim$97\% test reduction for a modified version with 5$\sim$25 policy changes.
Moreover, the results show that all three techniques are safe and that the selected test cases detect 3.5$\sim$11.4 regression faults on average, introduced by the 5$\sim$25 policy changes. Among the three techniques,
the third is the most efficient in terms of elapsed time.
}




\Comment{
To address this issue, we propose a test-augmentation technique that complements our test-selection techniques.
Note that we first use a regression-test-selection technique and then use the test-augmentation technique, which generates additional test cases
that cover rules $\in$ $R_{imp}$ not covered by the existing test cases.
Covering those rules helps investigate a larger portion of the changed policy behaviors for fault detection, using additional test cases that achieve higher rule coverage.
}



\Comment{
This paper makes three main contributions:

\begin{itemize}
  \item We develop a test-selection approach that selects, from existing test cases, every test case that may reveal different policy behaviors. Our approach
  uses three techniques: the first is based on mutation analysis, the second on coverage analysis, and the third on the evaluated
decisions of requests issued from test cases.
  \item We develop a test-augmentation technique that generates new test cases to cover rules impacted by policy changes but not covered by the test cases selected by the preceding techniques.

  \item We evaluate our approach on three real-world Java programs interacting with policies. Our evaluation results show that our test-selection techniques achieve
51\%$\sim$97\% test reduction for a modified version with 5$\sim$25 policy changes.
Moreover, the results show that all three techniques are safe and that the selected test cases detect 3.5$\sim$11.4 regression faults on average, introduced by the 5$\sim$25 policy changes. Among the three techniques,
the third is the most efficient in terms of elapsed time.
To increase confidence in the correctness of the modified version, our test-augmentation technique generates additional test cases that cover an additional 26\% of the not-covered rules impacted by policy changes. We show that these test cases are effective in detecting regression faults.
\end{itemize}

The rest of the paper is organized as follows.
Section~\ref{sec:background} presents background information about
policy-based software systems, policy context, and regression testing.
Section~\ref{sec:approach} presents our approach.
Section~\ref{sec:implementation} presents our implementation. 
Section~\ref{sec:experiment} describes the evaluation results
where we apply our approach on three projects.
Section~\ref{sec:discussion} discusses issues. 
Section~\ref{sec:related} discusses related
work. Section~\ref{sec:conclusion}
concludes the paper.
}





