
\section{Conclusions}
\label{sec:conclusions}


In this paper we described an empirical experiment comparing manual ad hoc tests
with model-based tests generated by TaRGeT. To analyze these techniques, we
controlled factors such as participant experience and use case complexity, and
then measured time, effectiveness, precision, and relative recall in order to
compare their performance.
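As a rough illustration of the comparison metrics named above, the sketch below uses common set-based definitions of precision and relative recall; the exact formulas used in the experiment may differ, and the defect sets are hypothetical:

```python
# Hedged sketch: common set-based definitions of precision and
# relative recall (the paper's exact formulas may differ).
def precision(valid_defects, reported_defects):
    """Fraction of reported defects that turned out to be valid."""
    return len(valid_defects & reported_defects) / len(reported_defects)

def relative_recall(found_by_technique, found_by_all):
    """Fraction of all defects (found by any technique) that this
    technique detected."""
    return len(found_by_technique & found_by_all) / len(found_by_all)

# Hypothetical defect identifiers for illustration only.
manual = {"D1", "D2", "D3"}   # defects reported by manual ad hoc testing
mbt = {"D3", "D4"}            # defects reported by model-based testing
all_defects = manual | mbt    # 4 distinct defects overall

print(relative_recall(manual, all_defects))  # 0.75
print(relative_recall(mbt, all_defects))     # 0.5
```

Note the small overlap between the two hypothetical sets (only \texttt{D3}); this mirrors the low defect duplicity discussed below.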


We concluded that the techniques have distinct characteristics, even though
the obtained results did not reveal significant differences between them.
Manual tests give testers more freedom, as unpredictable scenarios
can be exercised more easily; this approach therefore tends to be better at
detecting system logic defects. Model-based tests, on the other hand, are
better at evaluating the system documentation, and they are also better suited
for complex use cases, which have more flows and possible flow combinations.
In this context, the automatically generated output can cover all possible
scenarios more precisely.


This study suggests that a hybrid technique can be promising. Considering
the characteristics discussed above and the low duplicity of the defects found,
we argue that the techniques are complementary: if the strengths of each one
are combined, greater defect detection can be achieved.
Another promising direction concerns MBT test case selection. Since complex
use cases can yield hundreds of generated scenarios, selecting a minimal test
suite that maximizes flow coverage would greatly improve TaRGeT usage.
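The suite-minimization idea sketched above amounts to a set-cover problem, for which a greedy heuristic is a standard approximation. The following sketch (with hypothetical scenario and flow identifiers, not taken from TaRGeT) illustrates it:

```python
# Hedged sketch: greedy set-cover heuristic for selecting a small test
# suite that still covers every use-case flow. Scenario and flow names
# below are hypothetical, for illustration only.
def select_minimal_suite(scenarios):
    """Repeatedly pick the scenario covering the most uncovered flows."""
    uncovered = set().union(*scenarios.values())
    suite = []
    while uncovered:
        # Scenario that covers the largest number of still-uncovered flows.
        best = max(scenarios, key=lambda s: len(scenarios[s] & uncovered))
        if not scenarios[best] & uncovered:
            break  # remaining flows are not covered by any scenario
        suite.append(best)
        uncovered -= scenarios[best]
    return suite

# Hypothetical generated scenarios mapped to the flows they exercise.
scenarios = {
    "S1": {"main", "alt1"},
    "S2": {"alt1", "alt2"},
    "S3": {"main", "alt2", "exc1"},
    "S4": {"exc1"},
}
print(select_minimal_suite(scenarios))  # ['S3', 'S1']
```

The greedy heuristic does not guarantee the true minimum suite, but it is simple and gives a logarithmic approximation bound, which is typically adequate when pruning hundreds of generated scenarios.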


As future work, we plan a larger experiment across different projects with
different model-based testing tools, in order to study each tool's performance.
In addition, we intend to combine ad hoc manual tests with model-based tests.