
\section{Introduction}
\label{sec:intro}

% What system tests are and why testing is important
Testing is an important activity in the software development process. It
is responsible for showing that a program does (or does not do) what it is intended
to do and for discovering program defects before the software is put into real
use.
To detect such defects, it is necessary to plan and develop test
scenarios for the System Under Test (SUT), which are extracted from a
model.

% Describe a system model and its abstraction
Models are abstract representations of systems and may or may not be explicit.
% How the activity is usually conducted
Usually, the models used to generate a test suite are the product of
software testers' implicit knowledge. However, such an approach cannot guarantee
uniform test suite generation, and the system is usually tested in an ad hoc, manual way.
% How the activity is conducted with models
If the system model is described explicitly, reflecting the system behavior
accurately, it can be used as a shareable and reusable artifact. For instance,
the model can be used to generate an appropriate test suite for the SUT.
This approach is called model-based testing (MBT)~\cite{Apfelbaum97modelbased},
and benefits of this technique have been previously
acknowledged~\cite{Dalal99model-basedtesting,
5386905, Pretschner:2005:OEM:1062455.1062529}.



% Discuss that few works in the literature empirically compare traditional testing techniques and MBT
Despite the increasing research effort on model-based testing, to the best of
our knowledge, there are few contributions comparing this approach with
traditional manual and ad hoc tests.
Most reported successful results come from industry case
studies~\cite{Dalal99model-basedtesting, 5386905,
Grieskamp:2011:MQA:1952602.1952605}, and indeed few works have empirically compared both
approaches~\cite{Pretschner:2005:OEM:1062455.1062529}.
Moreover, most of those studies have considered code automation of the generated
tests, which makes it impossible to guarantee whether an
undetected defect is due to the technique used or
to a fault in the automation process.


% Introduce our work
In order to contribute to the state of the art comparing MBT tests with manual and
ad hoc tests, we conducted an empirical study comparing one MBT tool approach
(TaRGeT~\cite{Ferreira:Target:CBSOFT})
with a manual ad hoc test approach.
Both approaches were evaluated in the context of acceptance testing.
% Briefly describe the empirical experiment conducted
We defined metrics to measure the performance of each technique (regarding
the quantity of detected defects) and the time spent on each technique. Furthermore,
we analyzed the correlation of these variables with factors such as the testers'
experience, the use case complexity, and the types of bugs found.

% Present the system on which the tests were executed
Since testing large, complex systems is challenging~\cite{479366} and different
test approaches can be applied according to organizations' budgets and deadlines,
both techniques were applied to a real project under development for
the Federal Police of Brazil\footnote{Names and major details of this system are
omitted due to privacy policies.}.
This system aims at supporting the processing of and access to information
coming from police investigations, as well as at improving the performance and
management of police activities.
% Other motivations
The experiment was also motivated by management decisions. By comparing the
project's current ad hoc test approach with an MBT approach, we could reason about which
technique is more suitable for the project.
For instance, considering simple and complex requirements specifications, could
MBT tests detect defects in scenarios not predicted by ad hoc tests? Is the time
and effort of using an MBT technique worthwhile?


The study considered an early internal release of the system, which has more
than 70 use cases and approximately 60 KLOC, divided into more than 30 packages,
600 classes, and 4,700 functions.
In our experiment, we found a total of 82 defects in the project's first
internal release. Ad hoc tests detected 43 defects, while TaRGeT tests detected
39 defects.
Thus, both techniques found nearly the same amount of bugs and are
{\it roughly} statistically equivalent.
Despite their similar bug detection rates, the quantity of duplicated defects
found by both techniques was low (less than 10\%). Hence, we investigated the
particularities of each approach and the types of defects each one found.


% Briefly present our conclusions
Our major contribution in this paper is an empirical study comparing
a manual approach and an MBT approach using TaRGeT.
Concerning the studied approaches, our conclusions from using MBT in acceptance testing
corroborate general model-based testing claims, which state that model-based test
suites detect {\it roughly} the same number of defects as hand-crafted test suites.
Nevertheless, we highlight each approach's advantages and the tests for which
each one is better suited.




% Explain how the rest of the document is structured
The remainder of the paper is organized as follows: Section~\ref{sec:context}
gives more details about how the ad hoc and TaRGeT tests were conducted.
Section~\ref{sec:experiment} details the experiment and its results.
Section~\ref{sec:related-work} compares our research with related
work, whereas Section~\ref{sec:conclusions} presents final remarks
and future work.

