\subsection{Discussion}
\label{sub-sec:exp-discussion}

The experiment results revealed an unexpected scenario: the intersection of bugs
detected by both approaches was very small. Out of 82 bugs, only 6 were
duplicates, meaning that less than 10\% of the bugs were detected by both
techniques.
Furthermore, the null hypotheses for testing time, effectiveness, precision, and
relative recall were not rejected.
Thus, we investigated why most of the metrics and hypothesis tests did not
provide statistical evidence to identify a better technique.

\subsubsection{Time}
\label{sub-sec:dis-time}

First, we evaluated the time variable.
% General evaluation
From our observations and data (Figure~\ref{fig:time} and
Table~\ref{tbl:result-time}), we noticed that there were few cases in which the
time of one technique was discrepant between TaRGeT and ad hoc tests.
There is no observable pattern: in some cases one approach is faster than the
other, and vice versa.
Thus, it is plausible to consider the times of the two techniques equivalent.



The time outliers contributed to the statistical equivalence of the times of the
two techniques.
These outliers come from the most complex use cases and cannot be removed from
our analysis.
% Manual
We noticed that, for the ad hoc approach, the more experienced a participant is,
the more time he or she spends testing a use case (UC), whereas inexperienced
participants are less critical. Thus, the ad hoc outliers come from experienced
participants testing two of the most complex UCs (UC01 and UC04).
% TaRGeT
On the other hand, TaRGeT provided a script to be followed, so the experience of
the participants was not as relevant as in the manual tests.
TaRGeT's single outlier comes from the modeling activity of the most complex use
case (UC05) of the whole release.

Regarding both outliers and general observations, it is important to emphasize
the strong correlation between complexity and time for both techniques. This
correlation gives confidence that the extended use case point (EUCP) metric can
be used to measure a system's complexity, as complex use cases with a long
development time also took longer to be tested.
Also, despite the weak negative correlation between experience and time, the
analysis suggests that experienced testers can test or model scenarios in less
time.
Hence, we highlight that it is important to consider the system's complexity
when deciding whether or not to adopt a model-based approach.
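The correlations in Table~\ref{tbl:result-correlation-tech} can be reproduced
with a standard coefficient. A minimal sketch, assuming Pearson's $r$ and
hypothetical per-UC data (the experiment's raw measurements and the exact
coefficient used are not restated here):

```python
from statistics import mean, stdev

def pearson(xs, ys):
    # Sample Pearson correlation coefficient between two equal-length lists
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical data: EUCP complexity vs. testing time (minutes) per use case
complexity = [3, 5, 8, 13, 21]
time_spent = [10, 14, 25, 30, 55]
print(round(pearson(complexity, time_spent), 2))  # strong positive correlation
```

Feeding the per-technique complexity and time measurements into such a routine
would yield coefficients analogous to those in
Table~\ref{tbl:result-correlation-tech}.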


The preparation and execution time observations also raised other questions and
further discussion. For instance, one could argue that, after an initial
modeling phase, TaRGeT tests would have a better overall time than ad hoc tests,
since the TaRGeT tests are already modeled. However, a process change and a
general test suite documentation effort are needed whether the test suite is
generated in an ad hoc way or based on models. Therefore, we considered both
preparation and execution times in our experiment.



\begin{table}[!t]
\caption{Analysis of correlation per Technique}
\label{tbl:result-correlation-tech}
\centering
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{3}{c|}{\bfseries Ad hoc} & \multicolumn{3}{c}{\bfseries TaRGeT} \\
\hline
\bfseries 
& \bfseries \scriptsize{Experience} & \bfseries \scriptsize{Complexity} & \bfseries 
& \bfseries \scriptsize{Experience} & \bfseries \scriptsize{Complexity} \\
\hline\hline
\bfseries Bugs & 0.06 & 0.56 & \bfseries Bugs & 0.22 & 0.33 \\
\hline
\bfseries Time & -0.08 & 0.50 & \bfseries Time & -0.11 & 0.85 \\
\hline
\end{tabular}
\end{table}


\subsubsection{Defects}
\label{sub-sec:dis-bugs}


Concerning the detected defects and the low duplicity of bugs found by the two
techniques, we analyzed the particularities of each approach and how the
participants exploited them.
 


First, we observed that there are scenarios in which only the manual approach
could detect defects.
Even with TaRGeT's proportionally larger bug detection, there were five UCs
(UC03, UC04, UC09, UC12, UC17) where manual tests performed better and the
TaRGeT technique could not detect any defect.
Analyzing the descriptions of those defects, we noticed that the ad hoc tests
have some particularities.

Implicit knowledge from the testers cannot be captured in TaRGeT test suites;
consequently, some defects could not be detected by this technique. These ad hoc
scenarios offset TaRGeT's outliers, even though TaRGeT was better at detecting
defects in the most complex use cases.

Likewise, there are scenarios where TaRGeT's modeling granularity involves a
trade-off between time and bug detection. The more time spent modeling a complex
use case, the greater the flow coverage. However, this can lead to hundreds of
scenarios and to an impractical test script.
% Manual advantage
On the other hand, manual tests have more flexibility in such scenarios.
As system testers gain more knowledge of the test suite, they can diverge more
easily from a test script and try unpredicted scenarios.
% Example
For instance, one of UC12's ad hoc defects describes a scenario where a user
opens a panel and visualizes the information but, rather than confirming it,
goes back to a previous page and performs another action, generating the bug
scenario.


% Counterpoint
Despite their flexibility, manual tests do not guarantee flow coverage.
As the complexity of a use case increases, a more structured approach becomes
more suitable to describe all of its test cases.
Given a correct modeling of complex use cases, TaRGeT detected more defects in
such scenarios (UC01, UC02, UC05). As the complexity of a use case increases, a
manually generated test suite is more susceptible to failures.
Therefore, TaRGeT's automatically generated test suites are better suited for
complex use cases.
For instance, UC01's last exception flow describes an invalid zip code scenario,
but the SUT does not distinguish valid from invalid zip codes.
As the UC had many scenarios, this flow went unnoticed by the ad hoc tests,
whereas TaRGeT explicitly described it.


Regarding defect types and severity, the tests of proportion indicate that both
techniques find bugs of different severities and types equally well.
However, as the TaRGeT tests for documentation defects were near a significance
level of 10\% ($\alpha = 0.10$), we believe that with a larger sample the
proportion would tend to be greater for the TaRGeT technique.
We also highlight that, despite the equal proportions of documentation defects,
inexperienced testers found documentation defects easily using TaRGeT, whereas
the ad hoc documentation defects were found by more experienced participants.
Similarly, system logic defects were near a significance level of 15\%
($\alpha = 0.15$). Hence, we argue that with a larger sample the proportion
would tend to be greater for the ad hoc approach.
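The tests of proportion discussed above follow a standard construction. A
minimal sketch of a two-sided two-proportion $z$-test, with hypothetical defect
counts (the experiment's actual counts and the exact test variant are not
restated here):

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    # Two-sided two-proportion z-test; returns (z statistic, p-value)
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # p-value = 2 * P(Z > |z|), standard normal CDF expressed via erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical documentation-defect counts: TaRGeT 12/40 vs. ad hoc 6/42
z, p = two_proportion_z(12, 40, 6, 42)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these hypothetical counts, the p-value falls between the 5\% and 10\%
levels, illustrating how a difference can be suggestive yet fail rejection at
$\alpha = 0.05$, as observed for the documentation defects.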


In light of this discussion, we believe that the counterbalancing of the
techniques contributed to their statistical equality and to the low duplicity of
found bugs, hence the effectiveness, precision, and recall null hypotheses were
not rejected. As each technique has advantages and drawbacks, a test approach
that combines the advantages of both model-based testing and ad hoc tests seems
promising.


\subsection{Threats to validity}
\label{sub-sec:exp-threats}

In this section, we describe the threats to the validity of the conducted
experiment and how we mitigated them. They are detailed as
follows:
% For a better understanding
%of the threats, they are divide into threats to conclusion, internal,
% construct, and external validity~\cite{Wohlin:2000:ESE:330775}. 

\subsubsection{Conclusion}

The greatest threat to our conclusions is the low statistical power. Since the
sample comprised only one system release and few participants, the low
statistical power of our tests is a major drawback of the experiment.
Nevertheless, we counted on 54 experimental units, so our conclusions are still
sound. We will conduct more tests in future releases of the project, increasing
our experiment database for further analysis. With a larger database, we can
reevaluate the tests of proportion and the hypothesis tests as described before.

\subsubsection{Construct}

The main threat to the construct validity is the interaction of different
treatments: the experience of the participants and the use cases' complexity. To
mitigate this factor, we analyzed the correlation of these variables as
previously described.

\subsubsection{Internal}

Since the participants gained experience as the experiment progressed, the major
internal threat relates to the history and maturation of the participants. To
mitigate this threat, we controlled the number of use cases tested per
participant, as well as the techniques used.
% There were some issues during the experiment that affected two
% participants, however they were a small portion of the sample,
% so the major conclusions are still applied.

\subsubsection{External}

The principal threats to the external validity are the restrictions on
participant selection and the experiment's context.
The experiment was conducted on a real system under development, but that system
has particularities that may not hold for other projects. Also, the participants
could not be selected freely due to privacy policies. These issues are
significant threats to the generalization of our results. However, given the
limited number of empirical studies of this kind, our study is still a relevant
contribution to the literature.