\section{Experiments}\label{sec:experiment}

We conducted experiments to evaluate our proposed
regression-test-selection techniques.
We carried out the experiments on a PC running Windows 7 with an Intel Core i5 2,410 MHz processor and 4 GB of RAM.
As experimental subjects, we collected three Java programs \cite{mouelhi09:tranforming} each
interacting with policies written in XACML.
The Library Management System (\CodeIn{LMS}) provides web services to borrow/return/manage books in a library.
The Virtual Meeting System (\CodeIn{VMS}) provides web conference services to organize online meetings.
The Auction Sale Management System (\CodeIn{ASMS}) provides web services to manage online auctions.
These three subjects include 29, 10, and 91 security test cases, respectively, which target
security checks and policies. The test cases cover 100\%, 12\%, and 83\% of
the 42, 106, and 129 rules in the policies of \CodeIn{LMS}, \CodeIn{VMS}, and \CodeIn{ASMS}, respectively.


\textbf{Instrumentation.}
We implemented a regression simulator, which injects any number of policy changes
based on three predefined regression types.
\CodeIn{RMR} (Rule Removal) removes a randomly selected rule.
\CodeIn{RDC} (Rule Decision Change) changes the decision of a randomly selected rule.
\CodeIn{RA} (Rule Addition) adds a new rule composed of attributes randomly selected from those occurring in $P$.
Combinations of the three regression types can produce a wide variety of policy changes.
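To make the three regression types concrete, the following is a minimal Python sketch over a simplified rule-list model of a policy; the function and field names are ours for illustration, not those of our implementation:

```python
import random

# Minimal sketch of the regression simulator over a simplified policy model:
# a policy is a list of rules, each mapping a subject/action/resource triple
# to a decision. All names here are illustrative.

def rmr(policy):
    """RMR: remove a randomly selected rule."""
    policy.pop(random.randrange(len(policy)))

def rdc(policy):
    """RDC: flip the decision of a randomly selected rule."""
    rule = random.choice(policy)
    rule["decision"] = "Deny" if rule["decision"] == "Permit" else "Permit"

def ra(policy):
    """RA: add a rule built from attributes already occurring in the policy."""
    pick = lambda key: random.choice([r[key] for r in policy])
    policy.append({"subject": pick("subject"), "action": pick("action"),
                   "resource": pick("resource"),
                   "decision": random.choice(["Permit", "Deny"])})

def inject_changes(policy, m):
    """Apply m randomly chosen regression types to a copy of the policy."""
    mutated = [dict(r) for r in policy]
    for _ in range(m):
        random.choice([rmr, rdc, ra])(mutated)
    return mutated
```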

%Among the three regression types, \CodeIn{RA} is the most flexible
%because a rule consists of any combination of randomly selected attributes.
For our experiments, the regression simulator injects 5, 10, 15, 20, and 25 policy changes.
We repeat each experiment 12 times to mitigate the impact of the randomness of the injected policy changes.
We measure the effectiveness and efficiency of our three techniques in terms of
test-reduction percentage, the number of fault-revealing test cases, and elapsed time.



\begin{table*}[htbp]
  \centering
  \caption{Average number of test cases selected by each technique for each policy group.}
  	%\vspace{-8pt}   
    \begin{tabular}{|l|r|r|r||r|r|r||r|r|r||r|r|r||r|r|r|}
%\multirow{2}{*}{Subject} & \multicolumn{3}{c}{Regression - 5} & \multicolumn{3}{c}{Regression - 10} & \multicolumn{3}{c}{Regression - 15} & \multicolumn{3}{c}{Regression - 20} & \multicolumn{3}{c}{Regression - 25} & \\
\hline
     \multirow{2}{*}{Subject} & \multicolumn{3}{|c||}{Regression - 5} & \multicolumn{3}{|c||}{Regression - 10} & \multicolumn{3}{|c||}{Regression - 15} & \multicolumn{3}{|c||}{Regression - 20} & \multicolumn{3}{|c|}{Regression - 25} \\\cline{2-16}
       &\#S$_{M}$& \#S$_{C}$ & \#S$_{R}$ &\#S$_{M}$& \#S$_{C}$ & \#S$_{R}$ &\#S$_{M}$& \#S$_{C}$ & \#S$_{R}$ &\#S$_{M}$& \#S$_{C}$ & \#S$_{R}$ &\#S$_{M}$& \#S$_{C}$ & \#S$_{R}$
       \\\hline\hline
%    LMS   & 5.0   & 2.4   & 48.3  & 9.0   & 4.8   & 52.8  & 13.0  & 6.3   & 48.1  & 17.4  & 7.8   & 44.5  & 20.7  & 8.8   & 42.7 \\\hline
%    VMS   & 4.9   & 0.4   & 8.5   & 9.3   & 0.7   & 7.2   & 13.8  & 1.3   & 9.6   & 18.9  & 1.1   & 5.7   & 23.8  & 1.8   & 7.4 \\\hline
%    ASMS  & 5.0   & 1.9   & 38.3  & 9.8   & 4.8   & 48.7  & 14.4  & 7.3   & 50.9  & 18.7  & 9.4   & 50.4  & 23.3  & 12.1  & 51.8 \\\hline\hline
%    Average & 4.97  & 1.58  & 31.71 & 9.33  & 3.39  & 36.23 & 13.75 & 4.97  & 36.19 & 18.33 & 6.08  & 33.56 & 22.58 & 7.56  & 33.97 \\\hline

    LMS   & 4.7   & 4.7   & 4.5   & 11.0  & 11.0  & 9.5   & 12.9  & 12.9  & 10.2  & 14.8  & 14.8  & 13.8  & 16.8  & 16.8  & 14.6 \\\hline
    VMS   &0.1   & 0.1   & 0.1   & 0.4   & 0.4   & 0.2   & 1.2   & 1.2   & 0.8   & 1.6   & 1.6   & 1.2   & 1.8   & 1.8   & 1.1 \\\hline
    ASMS  & 6.6   & 6.6   & 5.9   & 10.9  & 10.9  & 10.0  & 16.4  & 16.4  & 14.8  & 21.3  & 21.3  & 19.3  & 22.4  & 22.4  & 17.2 \\\hline\hline
    Average &3.8   & 3.8   & 3.5   & 7.4   & 7.4   & 6.6   & 10.2  & 10.2  & 8.6   & 12.6  & 12.6  & 11.4  & 13.7  & 13.7  & 11.0 \\\hline

    \end{tabular}%
  \label{tab:cov-results}%
\end{table*}%



% Table generated by Excel2LaTeX from sheet 'Performance'

\textbf{Research questions.}
We intend to address the following research questions:
\vspace{-5pt}
\begin{itemize}
	\item RQ1: What percentage of test cases (from an existing test suite) do our test-selection techniques eliminate? This question helps show that our techniques can 
	reduce the cost of regression testing.
\vspace{-5pt}	
	\item RQ2: What percentage of the selected test cases reveal regression faults? This question helps show that our techniques can effectively select fault-revealing test cases.
\vspace{-5pt}
	\item RQ3: How much time do our techniques take to conduct test selection? This question helps compare the performance of our techniques by measuring their efficiency.
				
\end{itemize}

\Comment{
To help answer these questions, we collect few test metrics to show the effectiveness and the efficiency of our test-selection techniques and
our test-augmentation technique. 
The following metrics are
measured for each subject under test interacting with each modified policy
and each technique.
\begin{itemize}
%	\item \textit{Selected test case count}.  The test count is the size of the request set or
%the number of tests generated by the chosen test-generation
%technique. For testing access control policies, a test is synonymous with request
	\item \textit{The number of selected test cases ($\# S$).}  Given a policy and its modified
	policy, this metric shows the number of selected test cases by each technique.
	\item \textit{Test reduction percentage ($\% TR$).}  Given a policy and its modified
	policy, the test reduction percentage is the number of selected test cases for regression testing divided by the number of security test cases.
	\item \textit{The number of fault-revealing test cases ($\# FS$).}  Given a policy and its modified
	policy, this metric shows the number of fault-revealing test cases out of selected test cases by each technique.	

	\item \textit{Elapsed time.}  The elapsed time is time (measured in milliseconds) elapsed for each step during the test-selection process.
	\item \textit{Augmented test case count.}  The augmented test case count is the number of augmented test cases by each augmentation type.
	
\end{itemize}

	To assess effectiveness of test reduction (RQ1), we use ``$\# S$'' and ``$\% TR$''.
	To assess effectiveness of regression fault detection (RQ2), we use  ``$\# FS$''.
	To assess efficiency of test selection (RQ3), we use elapsed time.
	To assess effectiveness of test augmentation (RQ4), we use a augmented test case count metric.
}	
	 


\textbf{Results.}
To answer RQ1, we measure the test-reduction percentage ($\% TR$), i.e., the percentage of
the existing security test cases that a technique does not select (one minus the number of selected test cases divided by the number of existing security test cases).
Table~\ref{tab:cov-results} shows the number of selected test cases on average for each technique.
``Regression - $m$'' denotes a group of modified policies where $m$ is the number of policy changes on $P$.
``$\#S_{M}$'', ``$\#S_{C}$'', and ``$\#S_{R}$'' denote the number of selected test cases on average
by our three test-selection techniques, one based on mutation analysis ($TS_{M}$),
one based on coverage analysis ($TS_{C}$), and one based on recorded request evaluation ($TS_{R}$), respectively.
We observe that $TS_{R}$ selects fewer
test cases than the other two techniques.
The reason is that, while $TS_{M}$ and $TS_{C}$ select test cases 
based on syntactic differences,
$TS_{R}$ selects test cases based on actual changes in policy behavior (i.e., semantic policy changes).
As illustrated in Section~\ref{sec:approach}, a syntactic difference may not
result in an actual change in policy behavior.
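The selection step of TS$_{R}$ can be sketched as follows. This is a simplified Python model of our own (assuming first-applicable rule evaluation and a map from each test case to the requests it issued during recording); the real technique replays requests recorded from PEPs against the XACML PDP:

```python
# Simplified sketch of TS_R's selection step (illustrative names).

def evaluate(policy, request):
    """First-applicable evaluation over a rule list (toy PDP)."""
    for rule in policy:
        if all(rule[k] == request[k] for k in ("subject", "action", "resource")):
            return rule["decision"]
    return "NotApplicable"

def select_ts_r(p, p_prime, recorded):
    """Select the tests whose recorded requests change decision from p to p'."""
    return [test for test, requests in recorded.items()
            if any(evaluate(p, rq) != evaluate(p_prime, rq) for rq in requests)]
```

Note that a rule that is syntactically changed but shadowed by earlier rules produces no decision difference, so no test is selected for it.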
%(e.g., a newly added rule is overwritten by existing rules).
 

%\textbf{We rerun all of test cases and verify that our three techniques are safe test-selection techniques.}


Figure~\ref{fig:reduction} shows the test-reduction percentages for our three subjects
with modified policies.
LMS1, VMS1, and ASMS1 show the test-reduction percentages achieved by TS$_{M}$ and TS$_{C}$ (which select the same test cases), and LMS2, VMS2, and ASMS2 show those achieved by TS$_{R}$.
We observe that our techniques achieve 42\%$\sim$97\% test reduction for our subjects with 5$\sim$25 policy changes.
Such reduction substantially lowers the cost of regression testing
in terms of test-execution time.

%We also observe that all of our test techniques show the same
%set of selected test cases. This observation shows that all of our techniques are
%effective to select every test case impacted by policy changes.

%Upon further examination
%of the results, we find correlation between policy coverage and test reduction.
%We notice that \CodeIn{VMS} often shows the highest test reduction compared to \CodeIn{LMS} and \CodeIn{ASMS}.
%While the policy $P_{vms}$ used in \CodeIn{VMS} consists of 106 rules, $P_{vms}$ achieves low policy coverage.
%$P_{vms}$ achieves only 12\% of policy coverage with 10 security test cases.
%This indicates that more than 90 rules are not covered. Due to its low policy coverage, there would be low probability
%to find test cases (among the existing test cases) for covering impacted rules by policy changes.
%On the contrary, the policy $P_{lms}$ used in \CodeIn{LMS} achieves high policy coverage and the lowest
%test reduction percentage. Therefore, $P_{lms}$ may find more test
%cases to exercise impacted rules than $P_{vms}$. 

 


To answer RQ2, we show the percentage of selected test cases that reveal regression faults.
Detection of regression faults depends on the quality of the test oracles in the test cases.
The test cases for our three subjects include test oracles that check the correctness of the decisions
evaluated for all the requests issued from PEPs.
Therefore, all test cases selected by TS$_{R}$ detect regression faults (caused by semantic policy changes).
On average, the percentages of selected test cases that reveal regression faults are 87\%, 87\%, and 100\% for our three techniques TS$_{M}$, TS$_{C}$, and TS$_{R}$, respectively.
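Such an oracle check can be sketched as follows, again under our own simplified rule-list model of a PDP (an illustration, not the subjects' actual test code):

```python
def evaluate(policy, request):
    """First-applicable evaluation over a rule list (toy PDP)."""
    for rule in policy:
        if all(rule[k] == request[k] for k in ("subject", "action", "resource")):
            return rule["decision"]
    return "NotApplicable"

def run_security_test(policy, expected):
    """Oracle: each issued request must yield its expected decision.
    expected is a list of (request, expected_decision) pairs; the failing
    requests are returned, so a non-empty result reveals a regression."""
    return [rq for rq, want in expected if evaluate(policy, rq) != want]
```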
 
%Table~\ref{tab:cov-results} shows that the numbers of the selected test cases are
%3.5, 6.6., 8.6, 11.4, and 11.0 on average for our subjects with 5, 10, 15, 20, 25 policy changes, respectively.


%we observe that 
%fail for testing and contribute to
%detect regressions faults (caused by semantic policy changes).
%The number of 
%3.5, 6.6., 8.6, 11.4, 11.0 
%For our subjects, all of test cases selected by TS$_{R}$ detect regression faults.
%when policy changes introduce different system behaviors (reflected by semantic policy changes), our selected test cases by TS$_{R}$ fail for testing. For our subjects, all of test cases selected by TS$_{R}$ detect regression faults.


%, which developers inspect for fixing.
%
%3.5$\sim$11.4 regression faults (on average), which are introduced by given 5$\sim$25 policy changes.

%consider to fix in either test code or policies.
%As we created augmented test cases based on semantic policy changes, requests (\CodeIn{RI})issued from the augmented test cases
%show different evaluation results. 
%Note that tests cases in our subjects always check the correctness of evaluation results for \CodeIn{RI}.
%When we use test oracles in the existing test cases for the augmented test cases, these test oracles always reveal regression faults (reflected by different evaluation requests for \CodeIn{RI}) Developers need to inspect such cases and modify application code or test oracles.




\begin{figure}[t]
    \centering
        \includegraphics[width=2.5in]{reduction.eps}
        \vspace{-5pt}
    \caption{\label{fig:reduction}
    LMS1, VMS1, and ASMS1 show the test-reduction percentages for our subjects with modified policies using TS$_{M}$ and TS$_{C}$, and LMS2, VMS2, and ASMS2 show those using TS$_{R}$.
    The Y axis denotes the percentage of test reduction; the X axis denotes the number of policy changes.}
    \vspace{-10pt}
%    \vspace{+3pt}
\end{figure}

\begin{table}[t]
  \centering
  \caption{Elapsed time (in milliseconds) for each test-selection technique and each policy.}
  	%\vspace{-8pt}   
    \begin{tabular}{|l|r|r||r|r||r|r|}

			\hline
       \multirow{2}{*}{Subject}   & \multicolumn{2}{|c||}{$TS_{M}$} & \multicolumn{2}{|c||}{$TS_C$} & \multicolumn{2}{|c|}{$TS_R$} \\\cline{2-7}
          & \multicolumn{1}{|c|}{Cor} & Sel & \multicolumn{1}{|c|}{Cor}  & Sel & \multicolumn{1}{|c|}{Col} & Sel  \\\hline\hline
		LMS   & 70,496   & 4     & 5,214   & 4     & 2,096  & 2 \\\hline
    VMS   & 19,771   & 1     & 7,506    & 1     & 1,873  & 2 \\\hline
    ASMS  & 118,248   & 11    & 22,423   & 11    & 1,064  & 21 \\\hline\hline
    Average & 69,505   & 5     & 11,714   & 5     & 1,678  & 8 \\\hline    
    \end{tabular}%
  \label{tab:performance-results}%
\end{table}%

To answer RQ3, we measure the time elapsed in conducting test selection.
The goal of this research question is to compare the efficiency of our three
test-selection techniques.
Table~\ref{tab:performance-results} shows the evaluation results.
For TS$_{M}$ and TS$_{C}$, the results show the elapsed time of
correlation (``Cor'') and test
selection (``Sel''), respectively. For TS$_{R}$, the results show the elapsed time of
request recording (``Col'') and test
selection (``Sel''). 
We observe that the correlation step of TS$_{C}$ (11,714 milliseconds on average) takes
substantially less time than that of TS$_{M}$ (69,505 milliseconds on average).
%$e_1$ and $e_2$ take 11,714 milliseconds and 69505 milliseconds on average, respectively.
The reason is that TS$_{C}$ executes the existing test cases only once,
whereas TS$_{M}$ executes them 2$\times$$n$ times, where $n$
is the number of rules in the policy under test.
In terms of total elapsed time,
we observe that TS$_{R}$ is 43 and 8 times
faster than TS$_{M}$ and TS$_{C}$, respectively.
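The 2$\times$$n$ cost of the correlation step of TS$_{M}$ can be sketched as follows, under our own simplified model (two mutants per rule, one replay of the suite's recorded requests per mutant; all names are illustrative):

```python
def evaluate(policy, request):
    """First-applicable evaluation over a rule list (toy PDP)."""
    for rule in policy:
        if all(rule[k] == request[k] for k in ("subject", "action", "resource")):
            return rule["decision"]
    return "NotApplicable"

def correlate_ts_m(policy, recorded):
    """Map each rule index to the tests affected by mutating that rule.
    For each of the n rules, two mutants are built (rule removed, decision
    flipped) and every test's requests are replayed once per mutant,
    i.e., 2*n replays of the suite in total."""
    correlation = {i: set() for i in range(len(policy))}
    for i, rule in enumerate(policy):
        flipped_rule = dict(rule, decision="Deny" if rule["decision"] == "Permit" else "Permit")
        removed = policy[:i] + policy[i + 1:]
        flipped = policy[:i] + [flipped_rule] + policy[i + 1:]
        for mutant in (removed, flipped):          # 2 mutants per rule
            for test, requests in recorded.items():  # one replay per mutant
                if any(evaluate(mutant, rq) != evaluate(policy, rq) for rq in requests):
                    correlation[i].add(test)
    return correlation
```

By contrast, a coverage-based correlation in the style of TS$_{C}$ needs only a single replay of the suite, which is consistent with its much lower correlation time in Table~\ref{tab:performance-results}.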

 
\textbf{Threats to validity.}
The threats to external validity primarily concern the degree to
which the subject programs, the policies, and the regression model are
representative of true practice. These threats could be reduced by
further experimentation on a wider range of policy-based software
systems and a larger number of policies. The threats to internal validity
are instrumentation effects that can bias our results, such as
faults in the PDP and faults in our implementation.



