\subsubsection{Defects}
\label{sub-sec:exp-bugs}

To analyze the number of bugs identified by each technique, we followed an
approach similar to the one used for the time metric. First, we manually
gathered the data from the bug tracking tool. Then, together with the project
manager, we verified whether the bugs were valid and checked for duplicates.
We also reviewed each bug's classification and the technique that detected it,
so we could group the bugs appropriately.


The experiment found a total of 82 new bugs: 76 distinct ones and only 6
duplicates. Defects were detected in 18 of the 27 UCs.
% Techniques
TaRGeT tests found 39 defects, while ad hoc tests found 43 defects.
Table~\ref{tbl:result-bugs-tech} presents this information grouped by
defect type, along with the corresponding percentages.
% Explain percentages
It is important to highlight that the calculated percentages are relative to
the total number of bugs.
Excluding duplicates and considering only the 76 distinct bugs, TaRGeT tests
found 52\% of them, while ad hoc tests found 57\%.

\begin{table}[!t]
\caption{Bugs found per Technique}
\label{tbl:result-bugs-tech}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
\bfseries & 
\bfseries \scriptsize Documentation &
\bfseries \scriptsize User Interface &
\bfseries \scriptsize System Logic &
\bfseries Total \\
\hline\hline
\bfseries Ad hoc & 13 (30\%) & 10 (23\%) & 20 (47\%) & 43 (52\%) \\
\hline
\bfseries TaRGeT & 14 (35\%) & 11  (28\%) & 14 (37\%) & 39 (48\%)\\
\hline
\hline
\bfseries Total & 27 (34\%) & 21  (25\%) & 34 (41\%) & - \\
\hline
\end{tabular}
\end{table}



% Types
The defects found during the experiment were classified as {\it
documentation}, {\it user interface}, and {\it system logic} defects.
The overall bug counts and percentages per type are also presented in
Table~\ref{tbl:result-bugs-tech}.
% Explain table
TaRGeT and ad hoc tests found similar percentages of documentation and user
interface bugs, although ad hoc tests showed a slight advantage over TaRGeT in
detecting system logic defects.
Despite the observed percentages, we performed tests of proportion to verify
the general case. With {\it p-value}s of 0.14, 0.60, and 0.16 for the
documentation, user interface, and system logic defect types, respectively, we
could not reject the hypotheses that the proportions are equal. Thus, the
defects detected per type can be considered {\bf proportionally equal} between
the two techniques.
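The section does not state which implementation of the test of proportion was
used; as an illustrative sketch (not the authors' actual procedure), a pooled
two-proportion $z$-test over the documentation counts from
Table~\ref{tbl:result-bugs-tech} can be run with Python's standard library.
The reported {\it p-value}s depend on how the proportions were defined, so
this sketch is not expected to reproduce them exactly.

```python
from statistics import NormalDist

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test (normal approximation).

    Returns the z statistic and the two-sided p-value for
    H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Documentation defects from Table 1: 13 of 43 (ad hoc) vs. 14 of 39 (TaRGeT)
z, p = two_proportion_ztest(13, 43, 14, 39)
# p > 0.05 here, so the equal-proportions hypothesis is not rejected
```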


As Table~\ref{tbl:result-bugs-crit} presents, the severity distribution of the
detected bugs is almost identical between techniques. The experiment found a
total of 2 blocker bugs, 8 critical bugs, and 10 major bugs; hence, 25\% of
the discovered bugs are major threats to the SUT. Similarly to the
defects-per-type analysis, tests of proportion verified the general case, and
with {\it p-value}s {$\geq$} 0.61, we could not reject the hypotheses that the
proportions are equal.

\begin{table}[!t]
\caption{Bugs severity per Technique}
\label{tbl:result-bugs-crit}
\centering
\begin{tabular}{c|c|c|c|c}
\hline
\bfseries & 
\bfseries \scriptsize Minor &
\bfseries \scriptsize Major &
\bfseries \scriptsize Critical &
\bfseries \scriptsize Blocking \\
\hline\hline
\bfseries Ad hoc & 31 (71\%) & 7 (16\%) & 4 (9\%) & 1 (4\%) \\
\hline
\bfseries TaRGeT & 31 (78\%) & 3 (7\%) & 4 (10\%) & 1 (5\%) \\
\hline
\hline
\bfseries Total & 62 (74\%) & 10 (12\%) & 8 (9\%) & 2 (5\%) \\
\hline
\end{tabular}
\end{table}

Regarding the overall bug detection per technique, {\bf on average} both
techniques found the same number of defects.
As presented in Figure~\ref{fig:bugs}-A, on average, both techniques detected
at least one defect per use case.
However, the box plot indicates that the number of bugs found by the ad hoc
technique varies more than that found by the MBT technique. Therefore, ad hoc
bug detection tends to be more unpredictable, whereas TaRGeT's is more
consistent.

% Present the Wilcoxon test for the total time
As the average bug detection is equal between the two techniques, we
calculated the effectiveness ({\it E}) of each one. Ad hoc tests found a total
of 37 valid bugs in 170 tested scenarios, yielding an effectiveness of 21\%.
TaRGeT tests, on the other hand, found a total of 34 valid bugs in 229
scenarios, with 14\% effectiveness. Ad hoc effectiveness was greater in most
use cases, although in a few of them TaRGeT surpassed it.
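Assuming effectiveness is simply the ratio of valid bugs to tested scenarios
(as in Equation~\ref{eq_effectiveness}) and that the quoted percentages are
truncated to whole percents, the 21\% and 14\% figures can be reproduced
directly:

```python
# Effectiveness E = valid bugs / tested scenarios; truncating to whole
# percents yields the figures quoted in the text (21% and 14%).
adhoc_E = 37 / 170    # ad hoc: 37 valid bugs in 170 tested scenarios
target_E = 34 / 229   # TaRGeT: 34 valid bugs in 229 tested scenarios

print(int(adhoc_E * 100), int(target_E * 100))  # 21 14
```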

\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{figs/bugs.png}
\caption{a) Box plot comparison of defects found during acceptance tests;
b) comparison of defects found per use case during acceptance tests}
\label{fig:bugs}
\end{figure*}


The effectiveness difference between the ad hoc and TaRGeT techniques was
7\%; therefore, we applied a Mann-Whitney-Wilcoxon test to statistically
verify whether the ratios between the number of bugs found per technique and
the number of tested scenarios ({\it E}) are equivalent.
From the test's {\it p-value} of 0.83, we could not reject the
{\it $H_{\emptyset2}$} hypothesis, which states that the effectiveness of the
two techniques is equivalent.
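The Mann-Whitney-Wilcoxon test itself can be sketched with the standard
library as well; the per-use-case effectiveness values below are {\it
hypothetical} placeholders (the experiment's per-UC data appears only in
Figure~\ref{fig:bugs}-B), so only the mechanics of the test, not the reported
{\it p-value} of 0.83, are illustrated.

```python
from statistics import NormalDist

def mann_whitney_u(xs, ys):
    """Two-sided Mann-Whitney U test via the normal approximation
    (no tie correction -- adequate for a sketch)."""
    # U counts, over all pairs, how often xs wins (ties count 0.5)
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0
            for x in xs for y in ys)
    n1, n2 = len(xs), len(ys)
    mean = n1 * n2 / 2
    sd = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mean) / sd
    return u, 2 * (1 - NormalDist().cdf(abs(z)))

# HYPOTHETICAL per-use-case effectiveness ratios, not the experiment's data
adhoc_eff = [0.20, 0.30, 0.10, 0.25, 0.15, 0.30]
target_eff = [0.10, 0.20, 0.15, 0.20, 0.10, 0.25]
u_stat, p_value = mann_whitney_u(adhoc_eff, target_eff)
```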


Regarding the comparison for {\it each use case}, the pairwise technique
comparison is presented in Figure~\ref{fig:bugs}-B.
TaRGeT tests detected more bugs in 9 (52\%) use cases, while ad hoc tests
performed better in 6 (31\%) of them. In only 3 UCs (17\%) did both techniques
find the same number of defects.

% Overall, ad hoc detected more
Regarding the overall {\it technique} comparison, the ad hoc technique found a
higher number of bugs per use case, and there are also scenarios where only
this approach detected defects (i.e., UC03, UC09, and UC15).
% There are also discrepant scenarios with TaRGeT
On the other hand, there are three outliers among the TaRGeT tests (UC01,
UC02, and UC05), which represent three of the most complex use cases of the
release.
% Conclusion
This contrast in bugs found per use case and technique ultimately led to the
equality on average.


Considering the pairwise comparison, the precision and relative recall of
each technique were calculated.
% precision
The precision is approximately 87\% and 85\% for the ad hoc and TaRGeT tests,
respectively.
% recall
Moreover, the calculated relative recall is about 54\% and 46\% for the ad hoc
and TaRGeT tests, respectively.
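Spelled out, the two metrics as described in this section are, for a technique
$t$ (this restates the text's definitions and may differ in notation from
Equations~\ref{eq_effectiveness} and~\ref{eq_relative_recall}):

```latex
\[
  P_t \;=\; \frac{\text{valid bugs reported by } t}{\text{all bugs reported by } t},
  \qquad
  R_t \;=\; \frac{\text{bugs found by } t}{\text{total number of bugs found}}
\]
```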



Intuitively, the precision, i.e., the proportion of valid bugs ({\it P}), of
the two techniques is equal. However, the 8\% difference in the relative
recall metric led to another Mann-Whitney-Wilcoxon test.
Statistically verifying the ratio between the number of bugs found per
technique and the total number of bugs ({\it R}), we could not identify a
technique with greater relative recall ({\it p-value} of 0.17 at a
significance level of 5\%).


Since the number of duplicated defects was low, neither technique has a
higher precision or relative recall than the other. Thus, neither the null
hypothesis {\it $H_{\emptyset3}$} nor the null hypothesis
{\it $H_{\emptyset4}$} could be rejected.



For the sake of brevity, the effectiveness and relative recall metrics were
discussed using only the valid bugs, as stated in
Equations~\ref{eq_effectiveness} and~\ref{eq_relative_recall}, respectively.
However, both metrics were also evaluated using all bugs, valid and invalid
(the latter represent 13\% of the detected bugs), and similar results were
obtained.



% Conclusions
From our experimental results, it is possible to state that, concerning the
effectiveness, precision, and relative recall of each technique, the ad hoc
and TaRGeT approaches detect {\it roughly} the same number of bugs. However,
it is important to highlight that we do not have statistical evidence to
confirm this equivalence.
