\subsection{Experiment Definition}
\label{sub-sec:exp-definition}

We structured the experiment definition using the Goal Question Metric
(GQM)~\cite{Wohlin:2000:ESE:330775} approach in order to collect and analyze
meaningful data about the testing techniques. The experiment definition is as
follows:

{\bf To analyze} manual and MBT test approaches {\bf in order to} compare ad hoc
tests and TaRGeT tests {\bf with respect to} their performance and effectiveness
from the {\bf point of view} of system testers {\bf in the context of} a real
system under development.
 


\subsection{Experiment Planning}
\label{sub-sec:exp-design}

% What is our experiment's hypothesis
We want to compare two acceptance-testing approaches: an ad hoc manual
test (AD) and an MBT approach using TaRGeT (TR). The metrics, which are detailed
further in this section, are related to effectiveness, precision, relative
recall, and the time spent during the test execution of each technique.
 
\subsubsection{Hypotheses} 
 
Based on the metrics just outlined, we state the following
statistical hypotheses:

Our first assumption is that the time spent on the preparation and execution of
the test suites is equal for both approaches:
\begin{equation}
H_{\emptyset1} : \text{time}_{AD} = \text{time}_{TR}
\end{equation}
\begin{equation}
H_{A1} : \text{time}_{AD} \neq \text{time}_{TR}
\end{equation}

Our second assumption is that the effectiveness of both approaches
is also equal:
\begin{equation}
\label{eqn_effectiveness_h0}
H_{\emptyset2} : \text{effectiveness}_{AD} = \text{effectiveness}_{TR}
\end{equation}
\begin{equation}
H_{A2} : \text{effectiveness}_{AD} \neq \text{effectiveness}_{TR}
\end{equation}

Our third assumption is that the precision of both approaches is equal:
\begin{equation}
\label{eqn_precision_h0}
H_{\emptyset3} : \text{precision}_{AD} = \text{precision}_{TR}
\end{equation}
\begin{equation}
H_{A3} : \text{precision}_{AD} \neq \text{precision}_{TR}
\end{equation}


Our last assumption is that the relative recall of both approaches
is also equal:
\begin{equation}
\label{eqn_recall_h0}
H_{\emptyset4} : \text{recall}_{AD} = \text{recall}_{TR}
\end{equation}
\begin{equation}
H_{A4} : \text{recall}_{AD} \neq \text{recall}_{TR}
\end{equation}

If we reject the null hypothesis for time, effectiveness, precision, or
relative recall, we will further investigate which technique performs better.
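To make the hypothesis tests concrete, the comparison for $H_{\emptyset1}$ can be sketched as a two-sided paired permutation test on the per-participant times. This is only an illustration: the time values below are hypothetical, and other tests (e.g., a paired t-test) could serve the same purpose; the experiment's actual statistical analysis may differ.

```python
import random

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Two-sided paired permutation test for H0: mean(a) == mean(b).

    Each iteration randomly swaps the members of each pair and recomputes
    the mean difference; the p-value is the fraction of permuted mean
    differences at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(x - y for x, y in zip(a, b)) / len(a)
    extreme = 0
    for _ in range(n_iter):
        diff = sum((x - y) if rng.random() < 0.5 else (y - x)
                   for x, y in zip(a, b))
        if abs(diff / len(a)) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Hypothetical total times (minutes) per participant, per technique.
time_ad = [30, 42, 35, 51, 28, 44, 39, 33]
time_tr = [48, 55, 41, 60, 37, 52, 50, 45]
p = permutation_test(time_ad, time_tr)
# Reject H0_1 at significance level 0.05 if p < 0.05.
```

With the (made-up) data above, every pair favors AD, so the test rejects $H_{\emptyset1}$ and the follow-up question becomes which technique is faster.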

\subsubsection{Controlled Factors}

In order to test our listed hypotheses, we need to
control some factors to avoid bias.

% What are the factors of our experiment
% Use case complexity - definition of UCP
The first factor is related to the {\bf system complexity}. We
have 27 use cases of varying complexity from the project's first
internal release.
To classify the use cases' complexity, we used a variant of the use case
points (UCP) metric~\cite{Diev:2006:UCM:1218776.1218780}. A UCP is a number
calculated from different factors present in a use case: the more actors and
possible actions a use case has, the greater its UCP value.
 
 
% Addition of metrics to UCP to reflect the real UC
We extended the UCP metric by adding the number of flows (main, alternative,
and exception) and the number of related UCs.
This gave us better insight into the project's real UC development time and the
calculated complexity. Our variant, named extended use case points
(EUCP), is a normalized value between 0 and 1, where a value near 0
indicates a low-complexity UC and a value near 1 indicates a high-complexity one.

\scriptsize
\begin{table}
\caption{Sample use cases and their calculated EUCPs}
\label{tbl:ucp-points}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\parbox[c][0.6cm]{1cm}{\centering \bfseries Use Case} &
\parbox[c][0.6cm]{1cm}{\centering \bfseries Possible Actions} &
\parbox[c][0.6cm]{0.6cm}{\centering \bfseries Flows} & 
\parbox[c][0.6cm]{1.5cm}{\centering \bfseries Alternative Flows} &
\parbox[c][0.6cm]{1cm}{\centering \bfseries Related UCs} &
\parbox[c][0.6cm]{0.6cm}{\centering \bfseries EUCP }\\
\hline\hline
\bfseries UC05 & 20 & 9  & 7 & 3  & 0.94\\
\hline
\bfseries UC01 & 12 & 11  & 6 & 2  & 0.75\\
\hline
\bfseries UC02 & 10 & 6  & 3 & 1 & 0.50\\
\hline
\bfseries UC09 & 8 & 4  & 2 & 2 & 0.41\\
\hline
\end{tabular}
\end{table}
\normalsize


Table~\ref{tbl:ucp-points} presents a sample of the use cases used in
this experiment and their EUCPs.
Note that the more actions, alternative flows, and relations with other UCs
a use case has, the greater its EUCP value. The use cases were classified as
{\it simple} or {\it complex} according to a threshold $\tau = 0.50$.
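Since the exact EUCP weighting is not given in the text, the computation can be sketched as a weighted sum of the counted factors, min-max normalized across the project's use cases. The weights below are hypothetical placeholders, so the resulting values will not exactly reproduce the table; only the ordering of the use cases is preserved.

```python
def eucp_raw(actions, flows, alt_flows, related, weights=(1.0, 1.0, 1.0, 1.0)):
    """Raw complexity score for one use case (weights are hypothetical)."""
    w_act, w_fl, w_alt, w_rel = weights
    return (w_act * actions + w_fl * flows
            + w_alt * alt_flows + w_rel * related)

def normalize(scores):
    """Min-max normalize raw scores into [0, 1] across the project."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def classify(eucp, tau=0.50):
    """Label a use case by its normalized EUCP; the text uses tau = 0.50.
    (Whether exactly 0.50 counts as simple or complex is not specified.)"""
    return "complex" if eucp > tau else "simple"

# Factor counts from the EUCP table (actions, flows, alt. flows, related UCs).
counts = {"UC05": (20, 9, 7, 3), "UC01": (12, 11, 6, 2),
          "UC02": (10, 6, 3, 1), "UC09": (8, 4, 2, 2)}
raw = {uc: eucp_raw(*c) for uc, c in counts.items()}
norm = dict(zip(raw, normalize(list(raw.values()))))
```

Even with unit weights, the ranking matches the table: UC05 gets the highest normalized score and UC09 the lowest.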
 

 
% Participant selection
The second factor under control is the {\bf participants' experience}, since
different background knowledge and testing skills can influence our metrics.
We could not select participants freely because of privacy policies,
so we classified the participants'
experience and blocked the gathered data according to their knowledge.

% Participants' experience
We had 12 participants, drawn from the project's development and
management teams as well as researchers. To classify their experience, we used a
questionnaire asking about their programming and testing skills and their
knowledge of the SUT. From this questionnaire, we classified 7
participants as {\it novice} and 5 as {\it experienced}.

\subsubsection{Design}

% Type of the experiment as
Since we have two testing approaches, two use case complexity levels, and 12
participants, we opted for a Latin square design~\cite{Wohlin:2000:ESE:330775}
to control the participants' experience level and the UCs'
complexity. We placed the subjects in the rows and the complexity levels in
the columns, applying each technique once per subject and use case.
Later, we blocked and analyzed the results according to the participants'
experience.
With this approach, we had 54 experimental units and applied
principles such as randomization, blocking, and balancing~\cite{Wohlin:2000:ESE:330775},
where each participant tested simple and complex use cases with each technique.
Table~\ref{tbl:experiment-design} presents a simple layout of the experiment.


\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Layout of Experiment Design}
\label{tbl:experiment-design}
\centering
\begin{tabular}{c|c|c}
\hline
\bfseries & \bfseries Simple use case & \bfseries Complex use case\\
\hline\hline
\bfseries Participant 1 & AD & TR\\
\hline
\bfseries Participant 2 & TR & AD\\
\hline
\ldots & \ldots & \ldots\\
\hline
\bfseries Participant 11 & AD & TR\\
\hline
\bfseries Participant 12 & TR & AD\\
\hline
\end{tabular}
\end{table}
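Such a balanced layout can be generated mechanically. The sketch below is illustrative only: the participant labels and the seeded shuffle are placeholders, as the text does not detail the actual randomization procedure used in the experiment.

```python
import random

def balanced_layout(participants, techniques=("AD", "TR"), seed=42):
    """Shuffle the participant order (randomization), then alternate which
    technique is assigned to the simple vs. complex column (balancing),
    so each technique appears equally often in each column and every
    participant applies both techniques."""
    rng = random.Random(seed)
    order = list(participants)
    rng.shuffle(order)
    layout = {}
    for i, p in enumerate(order):
        first, second = techniques if i % 2 == 0 else techniques[::-1]
        layout[p] = {"simple": first, "complex": second}
    return layout

# Placeholder participant identifiers P1..P12.
layout = balanced_layout([f"P{i}" for i in range(1, 13)])
```

With 12 participants, each technique lands in each complexity column exactly six times, mirroring the alternating pattern in the layout table.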

\subsubsection{Execution}

% Setup
Before the experiment execution, a pilot project was planned and the
participants were trained.
The training consisted of explaining how to generate a test suite from a
use case, how to model a use case in TaRGeT, how to execute acceptance testing
using both the manual and MBT approaches and, finally, how to report bugs in the bug
tracking tool.
Hence, the participants could become familiar with both approaches
and the experiment environment, avoiding a learning bias.

% Execution
Since the participants were already familiar with the project's bug tracking tool
(Bugzilla\footnote{http://www.bugzilla.org/}), they were instructed to report
discovered defects in it.
We added fields to the bug report form to record the tested UC
and the technique used. We then gathered the number of bugs found
per use case ({\it $N_{Bugs}$}) and the number of scenarios tested ({\it
$N_{Scenarios}$}).


The reported bugs were classified
according to the IEEE Standard Classification for Anomalies~\cite{5399061}.
With respect to this classification, the following types are used to classify
the detected defects:

\begin{itemize}
  	\item Documentation: defects detected in the use cases' documentation.
For instance, a reported bug states that after successfully subscribing a person, the system
shows two messages, whereas the use case describes only one;

	\item User Interface: defects found in the system's interface.
Bugs in this category describe scenarios such as a message that should be
displayed to the user but is not, or a button or display whose name on the
interface differs from the one stated in the documentation;

	\item System Logic: defects originating from incorrect system processing.
For instance, a reported bug in this category states that the system does not
validate user data in a given form when it should, thus registering
invalid users.

\end{itemize}

The bugs' severity was also classified. The severity of a defect can be
described as the highest failure impact that the defect could (or did) cause, as
determined by (from the perspective of) the organization responsible for software
engineering~\cite{5399061}. Thus, a defect can be categorized as blocking,
critical, major, or minor according to the impact of the bug
on the SUT.

\begin{itemize}
  	\item Blocking: Testing is inhibited or suspended pending correction or identification of
		suitable workaround;

	\item Critical: Essential operations are unavoidably disrupted, safety is jeopardized,
		and security is compromised;

	\item Major: Essential operations are affected but tests can proceed;

	\item Minor: Nonessential operations are disrupted.

\end{itemize}



\subsubsection{Metrics}

% Metrics
With the collected data, we defined the following metrics for later analysis:



\begin{itemize}
  \item Total time ({\it $T_T$}): the total time spent per technique on the tests of
  each use case. The total time is the sum of the preparation ({\it $T_P$}) and
  execution ({\it $T_E$}) times.
  During the experiment execution, we gathered data about the time spent on the
  preparation of the test suites ({\it $T_{P}$}). This time is related to
  scenario configuration, execution of setup scripts, and creation of the test
  suite scenarios. For the MBT approach, this time also included the
  modeling activity. Later, the scenarios were executed and we gathered the
  execution time ({\it $T_{E}$}).
  \begin{scriptsize}
  \begin{equation}
  \label{eq_time}
	T_{T} = T_{P} + T_{E}
  \end{equation}
  \end{scriptsize} 
  
  \item Effectiveness ({\it E}): the ratio between the number of valid bugs and
  the number of tested scenarios per technique. The effectiveness is calculated from the
  number of detected bugs and generated scenarios with each technique. A valid bug is
  a bug that needs to be fixed; that is, it was not mistakenly reported
  by the tester and later marked as invalid or rejected.
  \begin{scriptsize}
  \begin{equation}
  \label{eq_effectiveness}
	E(\text{technique}) = \frac{N_{\text{validBugs}}(\text{technique})}
	{N_{\text{Scenarios}}(\text{technique})}
  \end{equation}  
  \end{scriptsize}
  
    
  \item Precision ({\it P}): the ratio between the number of valid bugs found
  per technique and the total number of bugs (valid or not) found per technique. 
  \begin{scriptsize} 
  \begin{equation}
  \label{eq_precision}
	P(\text{technique}) =
	\frac{N_{\text{validBugs}}(\text{technique})}
	{N_{\text{Bugs}}(\text{technique})}
  \end{equation} 
  \end{scriptsize}
  
  
  \item Relative Recall ({\it R}): the ratio between the number of valid bugs
  found per technique and the total number of valid bugs found for the use case.
  Since we cannot determine the actual number of defects in the system, we used the relative
  recall~\cite{Clarke:recall} instead of the standard recall metric. The relative recall
  is calculated as the ratio of valid defects found per technique to the sum of
  distinct valid defects found by both techniques.
  \begin{scriptsize}
  \begin{equation}
  \label{eq_relative_recall}
	R(\text{technique}) =
	\frac{N_{\text{validBugs}}(\text{technique})}{N_{\text{validBugs}}(AD) +
	N_{\text{validBugs}}(TR)}
  \end{equation} 
  \end{scriptsize}  
  
  
\end{itemize}
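The four metrics above map directly onto the collected counts. A minimal sketch, using hypothetical counts for a single use case (the real experimental data are not reproduced here):

```python
def total_time(t_prep, t_exec):
    """T_T = T_P + T_E, in the same time unit."""
    return t_prep + t_exec

def effectiveness(n_valid_bugs, n_scenarios):
    """E = valid bugs / tested scenarios for one technique."""
    return n_valid_bugs / n_scenarios

def precision(n_valid_bugs, n_bugs):
    """P = valid bugs / all reported bugs (valid or not) for one technique."""
    return n_valid_bugs / n_bugs

def relative_recall(n_valid_this, n_valid_ad, n_valid_tr):
    """R = valid bugs of this technique / valid bugs found by both techniques."""
    return n_valid_this / (n_valid_ad + n_valid_tr)

# Hypothetical counts for one use case.
valid_ad, bugs_ad, scen_ad = 4, 5, 20
valid_tr, bugs_tr, scen_tr = 6, 8, 30

e_ad = effectiveness(valid_ad, scen_ad)               # 4/20 = 0.2
p_tr = precision(valid_tr, bugs_tr)                   # 6/8 = 0.75
r_ad = relative_recall(valid_ad, valid_ad, valid_tr)  # 4/10 = 0.4
```

Note that with the denominator written as a plain sum, the relative recalls of AD and TR for a use case add up to 1; counting only distinct defects in the denominator would change this when both techniques find the same bug.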

