\documentclass[10pt,conference,compsocconf]{IEEEtran}

\usepackage{comment}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{color}


\newcommand{\fs}[1]{{\color{blue} {[FS: #1]}}}

\newcommand\head[1]{\smallskip\noindent\textbf{#1.}}


\author{
 	\IEEEauthorblockN{Andreas Bollin\IEEEauthorrefmark{1},
        Salvatore Campana\IEEEauthorrefmark{2},
 		Luca Spalazzi\IEEEauthorrefmark{3}, and
 		Francesco Spegni\IEEEauthorrefmark{3}} \\

 	\IEEEauthorblockA{\IEEEauthorrefmark{1}Alpen-Adria University of Klagenfurt,
		Klagenfurt, Austria\\
        andreas.bollin@aau.at} \\
 	\IEEEauthorblockA{\IEEEauthorrefmark{2}Computer VAR ITT,
		Verona, Italy\\
 		s.campana@computervaritt.it} \\
	\IEEEauthorblockA{\IEEEauthorrefmark{3} Universit\`a Politecnica delle Marche,
 		Ancona, Italy\\
 		\{spalazzi,spegni\}@dii.univpm.it }

}

%\title{A Methodology To Build Maintainable-By-Construction Software}
\title{Maintainable-by-Construction: A Case Study in Successfully Integrating Lightweight Formal Methods}

\begin{document}

\maketitle


\begin{abstract}
Modern software engineering increasingly demands software systems that can be
quickly adapted when changes happen in (a) business needs, (b) safety and security
requirements, or (c) technology. The \emph{methodology} adopted to develop the system
therefore assumes a central role in ensuring that the software can be maintained
in the most effective way, minimizing the risk of introducing errors. In this paper, a
software development methodology is described to build systems that are
\emph{``maintainable-by-construction''}.
The methodology requires the engineers to develop a software model using an
\emph{executable} and \emph{semantically well-defined} modeling language.
By linking the model to \emph{concrete actions} (e.g. components written in some
existing programming language, or remote web services, \ldots), the
resulting software system can be executed directly.
The proposed methodology also aims at minimizing the need for reverse engineering
when a reengineering process is required.
This paper presents a concrete industrial application of the methodology as a case study,
in order to underline its benefits with respect to software maintenance and reengineering tasks.
\end{abstract}

\section{Motivations}
\label{sec:intro}

Among many others, software engineering principles and practices face the following challenges \cite{Boehm2010,Meyer}:
\begin{itemize}
  \item Many systems are socio-technical and are expected to adapt to requirement changes constantly and immediately;
  \item Software development is distributed (even global);
  \item There is increasing attention on how to certify software quality.
\end{itemize}

These three challenges alone have an impact on the quality of the final software product
and, furthermore, they also influence the tasks of maintaining and possibly reengineering
a system. Moreover, existing development methodologies are focused more on the \emph{system
construction} than on its \emph{maintainability}. On the contrary, the best way to ensure
better software maintainability is to build a software system using a development methodology
that considers maintainability from the very beginning.

Priorities for such a methodology would be to minimize the risk of losing knowledge about the
system architecture, to maximize the effectiveness of communication among the
actors involved in the development process, and to clearly describe the software
system behavior.


All of these challenges, in our opinion, bear great potential for combining existing process models with formal methods. The first challenge, \emph{change}, is nowadays seen as natural. It is dealt with (and promoted effectively) by the agile movement [???]. However, due to the lack of formal evaluation techniques and rigorous planning, agile methods have not managed to substantially supersede the other existing techniques -- especially in the domain of safety-critical systems. A combination of these two approaches could bring the best of both worlds (as argued by Wolff [] and Black et al. []).
% Scrum Goes Formal: Agile Methods for Safety-Critical Systems, Sune Wolff, FormSERA 2012, Zurich, Switzerland
% Black, Boca, Bowen, Gorman, Hinchey, Formal versus Agile: Survival of the Fittest? Computer, 42 (9), Sept. 09, pp. 37]45
The second challenge, \emph{distributed} software development, is witnessed by the increasing
number of companies that decide to outsource (part of) their IT projects to development teams working in different countries, sharing neither the same native language nor the same time zone.
An even newer trend is known as crowdsourcing, where people over the internet are invited (and
often compete) to contribute to a software project \cite{Howe2006,Vukovic2009}.



Regarding the need for \emph{certifiable software}, in the last decades major IT actors have
implemented their own strategies to ensure that third-party software is secure w.r.t.
a given set of requirements \cite{Woodcock2009a,Ball2004,Security2011}.
This process can be based either on formal or informal methods.
Regarding formal methods, it should be noticed that
even widely recognized standards organizations and big corporations are introducing formal methods into
the software testing phases (see for example the DO-178C standard for avionic software
\cite{RTCA11a,Gigante2012} and the SWEBOK section on Software Engineering Tools
and Methods \cite{swebok2004}).




In Section \ref{sec:rw} the relation between our work and existing methodologies
is underlined. In Section \ref{sec:mbc} the \emph{Maintainable-by-Construction}
software development design principle is introduced. The EMBSE
methodology is defined in Section \ref{sec:embse}, pointing out how it is related to
some recognized software evolution paths. In Section \ref{sec:casestudy} a real-world
application realized by applying EMBSE is described. Finally, Section \ref{sec:conclusions}
is dedicated to concluding remarks.

\section{Methods and Principles}
\label{sec:map}

\subsection{Executable Model Based Software Engineering}
\label{sec:embse-overview}

In this paper we identify a novel design principle for software development that
we name \emph{Maintainable-by-Construction}. Together with it, we introduce a
methodology based on this principle, called Executable Model Based Software Engineering (EMBSE).
EMBSE is inspired by Model Driven Development, since the software development
team chooses to work using an (a) \emph{executable} and (b) \emph{semantically well defined}
modeling language (i.e. a 4th Generation Language, or 4GL) to express a model of the software system itself.
In a second phase the model is linked to a set of \emph{concrete actions} (e.g. components developed
using existing programming languages (3GL), or remote web services, \ldots).
The link between the model and the actions is established by means of proper \emph{action interfaces}.

Two (partially overlapping) domains where executable 4GLs have been widely adopted are Business Process Management (BPM)
and Service Oriented Architectures (SOA).
Both use business process modeling languages such as BPEL, BPMN, or other workflow languages.
The rationale is that, in such domains, the core of the software system
is the business process. The choice of a 4GL is also motivated by the fact
that, when the business needs change, the business process manager may decide to
change the overall process while continuing to use (part or all of) the basic activities
already employed in the previous process.

In our opinion, such a principle can be adopted outside the BPM and SOA domains as well.
The advantages in terms of software maintainability stem from the separation of
the software system design description and the implementation details of the
required actions. In order to show the applicability of EMBSE,
we show how it has been adopted to solve a real-world problem
that does not belong to the domain of traditional BPM and SOA problems.

The EMBSE methodology has also been inspired by the so-called \emph{simplicity} principle
of the Agile Manifesto, meant as \emph{``the art of maximizing the work not done''} \cite{Highsmith2001}.
\begin{comment}
every piece of software system specified via an abstract and
executable model is a shorter (and less error-prone) specification for a longer
piece of code written using regular programming languages.
\end{comment}
At the same time, we avoid a common pitfall of agile methodologies,
namely the substantial absence of documentation describing the system
\emph{architectural specification} \cite{Selic2009a}.
\begin{comment}
meant as a \emph{``technology independent description of the higher-level structure
and behavior of the system''} \cite{Selic2009a},
we claim that EMBSE  methodology helps to provide such missing piece of information.
\end{comment}

Finally, since EMBSE demands modeling languages with a precise semantics, the software
engineers may employ formal verification techniques during the software verification phases.
For example, software architects could express some high level system
specification via a given temporal logic that in turn can be model-checked. When
an in-depth system restructuring is required,
each formal specification could be used as a regression test to be verified again
after the reengineering task has been completed.


\subsection{Maintainable-by-Construction}
\label{sec:mbc}

\emph{Correct-by-Construction} (CbC) is a design principle requiring that, at every step,
from requirement elicitation down to system deployment, errors are not introduced and thus
the final software is correct.
In a similar way, \emph{Maintainable-by-Construction} (MbC) is a design principle
requiring that, at every step, the amount of generated artifacts is minimal.
This makes a software maintainable with a reasonable effort even under the ``extreme''
conditions reported in Section \ref{sec:intro}:
distributed development, business and technology rapid changes, need for software certification.

Similarly to CbC, MbC methodologies should use a \emph{top-down} approach to
build software starting with abstract descriptions.
On the other side, MbC development does not rely on
automatic transformations in order to produce progressively refined models, up
to an executable implementation, which is the core of CbC design.
The MbC point of view is, instead, to avoid the generation of concrete artifacts as far
as possible (e.g. the generation of source code). The underlying
principle is that \emph{the less artifacts are needed by the system, the lower is
its maintenance costs}, and also the lower is the risk to introduce errors during
restructuring operations. Therefore, the preferred approach consists in \emph{providing
only the missing pieces of information} in order to make the whole system executable.

In MbC, it is fundamental to rely on some \emph{modeling language} that concisely describes
the core design of systems. Therefore, such a language should be:
\begin{itemize}
  \item semantically well defined;
  \item executable.
\end{itemize}

\section{The EMBSE methodology}
\label{sec:embse}

The Executable Model Based Software Engineering (EMBSE) methodology is depicted
in Fig. \ref{fig:embse}.
EMBSE is a variant of the classic spiral model and applies the
Maintainable-by-Construction design principle. It targets software systems that \emph{evolve},
where evolvability is intended as a more specific form of \emph{maintainability}~\cite{Godfrey2008}. In this
sense, EMBSE considers from the beginning the possibility of changing requirements,
and aims at minimizing the impact of such changes on the system structure.
A key feature of EMBSE is that a single modeling language $\cal ML$ is shared
among almost all its phases.

\fs{below, rename Original System with Existing System. also modify the picture
and the text to explain that the existing system is a collection of artifacts from
a previous project (e.g. draw multiple sheets). also, draw a paper sheet divided
in two parts, meaning that we exchange the artifacts together with its quality
measures}
All along EMBSE, a set of artifacts must be produced, namely:
\begin{itemize}
  \item \emph{Requirement documentation}:
  	it can assume various forms, depending on the application domain and project
  	needs. Two possible alternatives are a list of precise and concise functional
  	and non-functional system requirements, or a collection of user stories
	\cite{nuseibeh2000}.
  	
  \item \emph{System model}:
  	written in the modeling language $\cal ML$, it precisely describes the design skeleton of the
  	system. In this artifact, a description of the system core components and how they operate
  	is given at a sufficiently abstract level. The level of abstraction
	depends on the chosen $\cal ML$. Moreover, it should convey information
  	about which \emph{concrete actions} are involved in the system execution,
  	and in what order and under which conditions they should be invoked. If actions
  	take data as input and output, $\cal ML$ should also clearly state how
  	data flows.
  	
  \item \emph{Action interface}:
  	it is the contact point between the abstract system model and the concrete actions.
  	In order to be fully executable, the system model needs to invoke existing actions (e.g.
  	remote services or components written in the programming language $\cal PL$,
  	such as methods, or functions). This interface is also responsible for declaring
  	the actions' inputs and outputs.
  	
  \item \emph{Concrete actions}:
  	this is the set of all actions written in some existing programming language $\cal PL$.
  	They can be deployed locally or remotely as services. Concrete actions are
  	usually not directly executable applications, but pieces of code that, when
  	plugged into the proper context, can be executed and process the input data
  	to produce some result.

  \item \emph{Formal specifications}:
  	this artifact collects a set of system properties, expressed using some formal
  	language $\cal FL$. Such specifications
  	can be described using well-known formalisms, like Z or various temporal logics.
  	In case of system reengineering, having formal specifications has a twofold advantage:
  	(a) it drives software engineers to consider whether such
  	assumptions on the system are preserved after the transformation, and
	(b) in case they are preserved, they can be verified again after the reengineering.
  	
  \item \emph{Test cases}:
  	concrete actions can be considered as the building blocks of the overall system
	and, thus, they are expected to accomplish a well defined task. When the system
	model is linked to the concrete actions, the
	behavior of the overall system emerges. While concrete actions can be tested
	through unit-test cases written in the same programming language $\cal PL$,
	integration-test cases could be defined using some ``recording'' mechanism or
  	specialized languages.

  \item \emph{Verification report}:
  	this artifact is used to summarize the results of a specific verification or
  	testing phase. The specific artifact form can be more or less structured,
  	depending on the tools adopted to drive the tests.

  \item \emph{Original system}:
  	when dealing with reengineering, this is the collection of all the components
  	written using one or more existing programming languages ${\cal PL}_1$, \ldots,
  	${\cal PL}_n$, together with any accompanying documentation.
  	
\end{itemize}
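As a purely illustrative sketch of how the system model, the action interface, and the concrete actions relate, consider the following (in Python rather than the modeling/programming language pair used later in the case study; all names here are invented for illustration):

```python
from typing import Any, Callable, Dict

# Action interface: a mapping from abstract action names to callables.
# The executable model refers to concrete actions only through these names.
ActionInterface = Dict[str, Callable[..., Any]]

def make_registry() -> ActionInterface:
    """Bind abstract action names to concrete implementations."""
    def probe_router(address: str) -> dict:
        # Concrete action: a real system would issue network probes here.
        return {"address": address, "reachable": True, "lost": 0}

    def open_ticket(link: str) -> str:
        # Concrete action: a real system would call a ticketing service here.
        return "ticket opened for " + link

    return {"probe_router": probe_router, "open_ticket": open_ticket}

def invoke(actions: ActionInterface, name: str, *args: Any) -> Any:
    """The abstract model invokes a concrete action by name only."""
    return actions[name](*args)
```

Replacing a concrete action (e.g. swapping a local component for a remote service) then touches only the registry, not the model, which is the maintainability argument made above.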

\begin{figure}[t]
\centerline{
\includegraphics[width=\linewidth]{../figures/methodology-ab.pdf}
}
\caption{\label{fig:embse} The EMBSE software (re-)engineering cycle}
\end{figure}

The EMBSE process model is depicted in Figure \ref{fig:embse};
it consists of the following \emph{steps} and involves the following \emph{roles}:
\begin{itemize}

  \item \emph{Requirement Elicitation}:
  	here the \emph{requirement engineer} collects the
  	stakeholders' needs, i.e. \emph{what} the system is supposed to do (both functional
  	and non-functional requirements). The inputs here are the interviews with
  	stakeholders and, in case of successive iterations, the previous requirement documentation.
  	The output is a new requirement documentation.

  \item \emph{Analysis \& Design}:
  	here the \emph{software engineer} identifies \emph{how} the system should operate.
  	In this phase, the designer takes as input the \emph{requirements documentation} and
  	produces the \emph{system model}.

  \item \emph{Design Validation \& Verification}:
  	in this phase the \emph{software engineer} attempts to detect, as early as
  	possible, design errors. Since the modeling language $\cal ML$ is required
  	to be semantically well defined, design validation and verification can be
  	done via simulations or through formal reasoning. The choice depends on the
  	application domain and a cost-benefit
  	analysis. If the language $\cal ML$ is comprehensible enough to non-technical stakeholders,
  	they can be involved during this phase, in order to obtain a stronger feedback on the design.
  	The input of this phase is represented by the \emph{system model}, the \emph{requirement
  	document} and \emph{formal specifications}. This phase produces a \emph{verification report}
  	tracking any issues discovered at this stage.

  \item \emph{Reuse}: in this phase the \emph{software engineer} should take care
  	of identifying which of the \emph{concrete actions} mentioned by the $\cal ML$ model are
  	already available and which ones should be implemented from scratch. The output of this
  	phase is the \emph{action interface} document. If a test-driven approach is
  	preferred, black-box \emph{unit-test cases} for the actions to be developed
  	can be produced at this phase.

  \item \emph{Coding}:
  	here the \emph{software developers} take as input the \emph{action interfaces}
  	document and the \emph{requirement document}, to produce the \emph{concrete
  	actions} written in $\cal PL$.

  \item \emph{Testing}:
	here the \emph{software testers} are responsible for testing the system. In this
	phase \emph{unit-test cases} and \emph{integration-test cases} are produced
	and run, together with test cases produced during the Reuse stage. A
	\emph{verification report} is produced.

  \item \emph{Reverse Engineering}:
  	this phase should be used as entry point in EMBSE for a legacy project built using
  	different software development methodologies. Here the \emph{software analyst}
  	takes as input the original system \emph{source code}, together with its
  	\emph{system documentation}. The produced output is a \emph{system model}
  	written in $\cal ML$ together with a set of \emph{concrete actions} written in $\cal PL$.
  	
  \item \emph{Restructuring}:
  	here the \emph{software analyst} takes as input the \emph{system model} and
  	\emph{concrete actions} produced during the \emph{reverse engineering} phase
  	and his/her task is to refine them. The outcome is a new set of \emph{system
  	models} and \emph{concrete actions}.

  \item \emph{Quality Assurance}: 
    it is the sum of all the activities aiming at managing the quality of the 
    software project under analysis. Typically it would involve
    the measurement of several aspects in each of the other phases. For example, during
    the Coding/Forward Engineering phase one may apply metrics such as lines of
    code, or \fs{add...}, during Requirement Elicitation the number of function
    points and their costs can be analyzed, while during Testing the number of
    bugs per line or the code coverage can be reported.\fs{add citations}.
    \fs{should we mention here special quality assurance measures for formal
    methods? e.g. number of requirements that can be translated as specifications,
    number of specifications that can be checked, number of counterexamples found,
    number of refinement steps required, if CEGAR or other abstraction-and-refinement
    techniques are used...}
  	
\end{itemize}

\begin{figure}[t]
\centerline{
\includegraphics[width=\linewidth]{../figures/methodology_uml_artifact.png}
}
\caption{\label{fig:embse-data} The artifacts produced by EMBSE \fs{see Andreas' notes on how to modify this}}
\end{figure}

In Fig. \ref{fig:embse-data} a representation of the relation between the methodology
phases and the produced artifacts is given. The cogwheels indicate the phases,
while the arrow direction determines if an artifact should be considered an input
or an output of the phase itself.

It should be noticed that reengineering is commonly divided in three stages: 
\emph{reverse-engineering}, 
\emph{restructuring}, and 
\emph{forward-engineering} \cite{Byrne1992,Chikofsky1990}. 
In EMBSE, the first two are explicitly named, while the third one is given by the
sequence of the remaining steps in the loop.
 
Furthermore, the reverse-engineering phase can be done manually or with some tool.
Since the process of software reengineering, by its very nature, must deal with
uncertainty and approximation,
the engineer's supervision is fundamental during this phase.
Another aspect that must be taken into account when EMBSE is used for reengineering
is the choice of the modeling language $\cal ML$.
This choice depends on the available reverse-engineering tools for the original programming
language (e.g. some use dynamic approaches \cite{Walkinshaw2007}, while others prefer
static analysis methods \cite{Abadi2012}).

Even during \emph{restructuring}, human supervision is fundamental. Automatic tools,
though, can help to focus the analyst's attention on \emph{points-of-interest}
along the extracted $\cal ML$ models and $\cal PL$ concrete actions. If refactoring tools are
available for languages $\cal ML$ and $\cal PL$ (e.g. see \cite{Mens2004} and \cite{Abadi2009}
respectively), they should be used during this phase.

\subsection{EMBSE for evolving software}

Boehm and Lane classified the following ways in which software may evolve \cite{Boehm2010,Boehm2010b}:
\begin{itemize}

  \item \emph{Single step}:
  	the system is built from beginning to end, because the system
  	has a low probability of changing its functionalities.
  	
  \item \emph{Prespecified sequential}:
  	the system has a low probability of changing its functionalities, but the
  	stakeholders believe that waiting for the full system to be specified would
  	imply a loss of important mission capabilities.
  	
  \item \emph{Evolutionary sequential}:
  	the software development team produces an initial working product. The operational
  	experience provided by the deployed system will help to identify, design and
  	prioritize the next system requirements.
  	
  \item \emph{Evolutionary overlapped}:
  	it is like \emph{evolutionary sequential}, but the next increment in the
  	system's functionalities depends on the availability of so-called \emph{increment
  	enablers} (e.g. a more mature technology, or an external system functionality, \ldots).
  	
  \item \emph{Evolutionary concurrent}:
  	in this software evolution model, three teams are involved: \emph{system engineers}
  	identify and specify the next increments, starting from their requirements; \emph{developers}
  	stabilize the current increment; and the \emph{verification and validation team}
  	performs continuous defect detection.
\end{itemize}

With respect to the above classification, for single step and prespecified sequential
software evolution other methodologies should be preferred (e.g. those based on
the waterfall or V-model).

The EMBSE methodology is more profitably applied to the three evolutionary models: in such
contexts, it is possible to take advantage of the abstract representation of the
system design.
Indeed, changes leading to a reengineering of the system design may be specified
on the model, while changes to the low level concrete actions can
be applied through refactoring techniques or redesigning only the involved components.

In the case of evolutionary concurrent software, EMBSE is particularly suited.
The three teams, in fact, can work separately (and with specialized tools) on the
system models and components: the \emph{system engineers} are responsible for collecting new
requirements (or modifying existing ones) and highlighting their implications on the
design expressed in $\cal ML$; the \emph{developers} implement and fix bugs in the
concrete actions; while the \emph{verification and validation team}
is in charge of discovering system weaknesses through formal specifications written
in $\cal FL$, simulating executions of $\cal ML$ models, or unit-testing actions
coded in $\cal PL$.

As already stated, the software development activity is increasingly becoming
a distributed engineering activity \cite{Meyer}. The introduction of
a distinction between the abstract aspects of a system and the concrete actions
eases the distribution of work and the synchronization among
contributors. Moreover, the \emph{action interfaces} recognized during the Reuse phase
can be used as contact points between the geographically distributed teams
of designers and developers.

Finally, one of the main advantages of EMBSE over other methodologies is
the cost minimization of migrating a non-EMBSE project to an EMBSE project.
As a matter of fact, the most appropriate modeling language can be chosen,
so that the new software models can be connected to the already developed software
libraries. This reduces the impact of the first software reengineering (i.e. from
a legacy application).
At this point, any further reengineering can be focused only on refining the
produced models and concrete actions of the obtained EMBSE project. This advantage is
very valuable from the point of view of the industrial partner's customers, whose
main desire is to minimize the cost of modernizing and maintaining their existing
software systems.


\section{A pilot project}
\label{sec:casestudy}

%%\subsection{The problem}
\head{The problem}
\label{sec:cs-intro}
%
The industrial partner applied the EMBSE methodology to solve a network monitoring
problem: given a wide-area network, there is the need to check the working status
of each core switch constituting the network backbone and the so-called last-mile.

An application was designed that runs on a \emph{monitoring server}. The application
monitors each network segment between the server and a core switch.
A network segment is called \emph{link} and the status of a link could be one of
the following:

\begin{itemize}
  \item \emph{up}: the router is reachable, no IP packet is lost;
  \item \emph{degraded}: the router is reachable, but IP packets are lost;
  \item \emph{down}: the router is not reachable.
\end{itemize}
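The three-way status distinction could be sketched as follows (a minimal, hypothetical illustration; the partner's actual PHP implementation is not shown, and the probe fields are assumptions):

```python
from enum import Enum

class LinkStatus(Enum):
    UP = "up"             # router reachable, no packet loss
    DEGRADED = "degraded" # router reachable, but packets lost
    DOWN = "down"         # router not reachable

def classify(reachable: bool, packets_lost: int) -> LinkStatus:
    """Map raw probe results onto the three link states."""
    if not reachable:
        return LinkStatus.DOWN
    if packets_lost > 0:
        return LinkStatus.DEGRADED
    return LinkStatus.UP
```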

The application's goal was to open a ticket signaling one of the two faulty
link conditions (resp. \emph{degraded} and \emph{down}) to the technical staff, so that they can solve the problem as soon as possible.
%
A further requirement of the monitoring system was that before opening a ticket, the
same link status should be tested three times, each time two minutes apart.\fs{is this sentence correct?}
%
Finally, after opening a ticket, the application should continue to monitor the
link, and in case the link switched to a non-faulty condition (link up), the
ticket should be automatically closed.
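The confirmation and auto-close behavior described above can be sketched as a small state machine (an illustrative Python rendering, not the deployed implementation; scheduling the two-minute probe interval is left to the caller):

```python
from dataclasses import dataclass
from typing import Optional

CONFIRMATIONS = 3        # the same faulty status must be observed three times
PROBE_INTERVAL_S = 120   # with two minutes between consecutive probes

@dataclass
class LinkMonitor:
    pending: Optional[str] = None  # faulty status awaiting confirmation
    count: int = 0                 # consecutive identical observations so far
    ticket_open: bool = False

    def observe(self, status: str) -> Optional[str]:
        """Feed one probe result; return 'open'/'close' when a ticket action is due."""
        if status == "up":
            self.pending, self.count = None, 0
            if self.ticket_open:
                self.ticket_open = False
                return "close"      # link recovered: close the ticket
            return None
        # faulty status: count consecutive identical observations
        if status == self.pending:
            self.count += 1
        else:
            self.pending, self.count = status, 1
        if self.count >= CONFIRMATIONS and not self.ticket_open:
            self.ticket_open = True
            return "open"           # fault confirmed three times: open a ticket
        return None
```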

The application was first developed using a different software development methodology.
After that, it had to be reengineered twice.
At this point, the industrial partner decided to use this application as a pilot project
in order to evaluate the EMBSE methodology.

This allowed testing one of the key advantages of the EMBSE methodology: namely, the
fact that it aims at minimizing the effort of converting a non-EMBSE project
into an EMBSE one.
After the first reengineering process, the industrial partner continued to apply
the EMBSE methodology and they claim that it considerably eased the second
reengineering process.

The application has been installed in 7 Italian telecommunications service providers,
two of which are nation-wide telecommunication companies. 
The monitored WANs range from a minimum of 4000
monitored routers, to a maximum of 80000. On the smallest network 90 tickets
per day are opened, while on the biggest 6000 tickets per day are opened. On average,
97\% of those tickets are opened automatically; through them, customers can be
proactively called by the customer care service and notified when network problems happen.

%\subsection{The choice of languages}
\head{The choice of languages}
%
\fs{talk first and more about formal specification; say that the formal 
specifications were crucial for the success of the project, but the other languages
were given by the company }
First of all, in order to apply the EMBSE methodology, the industrial partner had to choose 
which modeling language ($\cal ML$), programming language ($\cal PL$), and formal 
specification language ($\cal FL$) should be adopted.

Regarding the \emph{modeling language}, $\cal ML$ should be the most appropriate
to represent ``what'' the software must do at the right abstraction level
(see Section \ref{sec:embse}).
In this case, the problem presents some real-time aspects (there are temporal
constraints that affect the system behavior, see Section \ref{sec:cs-intro}).
Timed automata are among the most widely used models for real-time systems \cite{navet2010modeling}.
For the above reasons, we decided to use timed automata and to graphically depict them by means of statecharts
(a very popular modeling language \cite{Harel2007}), with temporal constraints as guards,
and to encode them as XML documents (according to the W3C proposal \cite{scxml}).
Furthermore, such a modeling language has a precise and well-defined operational semantics,
and therefore can be used as an executable language.
Finally, pieces of code (in a given programming language $\cal PL$) or even other automata
can be attached to each state (similar to nested states in UML State Diagrams).
This is used to link the high-level abstract model to concrete actions.
We called this modeling language XAL \cite{Campana2008, Campana2010}.
%
Each XAL automaton can change its state due to one of the following events:

\begin{itemize}
  \item an internal transition,
  \item the passage of time,
  \item the creation of a new running automaton,
  \item the synchronization between two automata.
\end{itemize}
%
Synchronization happens using send-receive rendez-vous, during which the
two involved automata can exchange data. Also when a new automaton is created,
it can receive some data. A detailed description of the XAL semantic rules is given
in \cite{Spegni2012a}.
%
Finally, the XAL execution environment allows one to decide whether each running automaton
should be executed as a different process or as a different thread, and how many
threads should be mapped per process. XAL is thus suitable for designing concurrent
systems.
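As a rough illustration of two of the event kinds listed above (internal transitions and the passage of time; automaton creation and rendez-vous synchronization are omitted, and this sketch is not the actual XAL runtime), a guarded timed-automaton step could look as follows:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Transition:
    target: str
    guard: Callable[[float], bool]  # predicate over the automaton's clock

@dataclass
class TimedAutomaton:
    state: str
    clock: float = 0.0
    # current state -> guarded internal transitions
    transitions: Dict[str, List[Transition]] = field(default_factory=dict)

    def elapse(self, dt: float) -> None:
        """Passage of time: advance the local clock."""
        self.clock += dt

    def step(self) -> bool:
        """Fire the first enabled internal transition, resetting the clock."""
        for t in self.transitions.get(self.state, []):
            if t.guard(self.clock):
                self.state, self.clock = t.target, 0.0
                return True
        return False
```

For instance, an automaton could stay in a waiting state until 120 seconds have elapsed before moving to a probing state, mirroring the two-minute probe interval of the case study.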

\begin{comment}
State-based representations proved to be very useful when building software
systems, to communicate design descriptions between the technical and non-technical
stakeholders \cite{harel2007}. It is a fact that state-based languages
have survived as documentations means through several evolutions of programming
languages. In some cases such state-based descriptions have a formal semantics,
like Harel's Statecharts \cite{harel2007}, while in other cases they are used with
a semi-formal semantics, like UML state diagrams. Also, in the application domain
of real-time systems, state based formal languages like Timed Automata are
adopted \cite{Bouyer2008}.
\end{comment}

Regarding the \emph{programming language} $\cal PL$, the industrial partner decided to use PHP.
This choice was motivated by the opportunity to reuse several libraries
already in use at the industrial partner.

Regarding the \emph{formal specification language} $\cal FL$, we adopted
the Uppaal subset of TCTL. This choice was motivated by the fact
that XAL models can be straightforwardly translated into Uppaal models.

XAL development is supported by an Eclipse-based IDE, allowing the developers
to work on XAL models and PHP code uniformly. The generation of Uppaal models was not
automated at the time of this pilot study, but since XAL is defined as an
executable language on top of Timed Automata semantics, an automatic translation
can easily be implemented. For this pilot study,
the correctness of the translation was checked by inspecting the source and target
models.

%\subsection{The initial system}
\head{The initial system}
%
The initial implementation of the application, introduced earlier in this
Section, was developed using a different, more agile, software development
methodology, based on the PHP language and its Object Oriented programming paradigm.
The application was divided into two parts: a \emph{system monitor}, responsible for
probing the routers and storing the collected results in a database, and a
\emph{ticket controller}, in charge of reading the produced data, correlating them, and
deciding whether and what kind of ticket should be opened. Tab. \ref{tab:system_2004}
reports some metrics on the implemented system.

The development of the initial system required nine man-months, involving one software analyst and two software developers.

\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{ticket controller} & \multicolumn{3}{|c|}{system monitor} \\
\hline
LOC & Classes & Methods & LOC & Classes & Methods \\
\hline
835 & 1 & 8 & 7531 & 5 & 38 \\
\hline
\end{tabular}
\caption{\label{tab:system_2004}Statistics for the initial software system}
\end{center}
\end{table}

\head{The specifications}
\fs{specify whether 4 spec were enough, or more spec were needed }
A fundamental requirement of the application is that the system never reaches a deadlock configuration, which can be expressed by the following Uppaal specification:
\[
\textit{AG}\, \neg \textit{deadlock} ~~ (S1)
\]
Another requirement is that it must be possible to open tickets for the \emph{down}
or the \emph{degrade} conditions.
\begin{eqnarray*}
\textit{EF}_{\geq 0}\, \textit{TICKET\_DOWN} ~~ (S2) \\
\textit{EF}_{\geq 0}\, \textit{TICKET\_DEGRADED} ~~ (S3)
\end{eqnarray*}
Another important requirement is that, for the same link, it must not be possible to open
two different types of tickets at once, expressed in TCTL as follows:
\[
\textit{AG}_{\geq 0}\, \neg (\textit{TICKET\_DOWN} \land \textit{TICKET\_DEGRADED}) ~~ (S4).
\]
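For reference, in the textual query language accepted by the Uppaal verifier, the four specifications above would be written along the following lines (here \texttt{Controller} is a hypothetical process name standing for the ticket controller automaton; the actual names depend on the translated model):

```
A[] not deadlock                                                 // S1
E<> Controller.TICKET_DOWN                                       // S2
E<> Controller.TICKET_DEGRADED                                   // S3
A[] not (Controller.TICKET_DOWN and Controller.TICKET_DEGRADED)  // S4
```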

%\subsection{The reengineered system}
\head{The reengineered system}
%
During the two application reengineerings, only the \emph{ticket controller} was
redesigned, while the \emph{system monitor} was kept unmodified.
During the first one, the reverse engineering phase of EMBSE was applied manually
and the restructuring phase was assisted by the XAL IDE.
In order to accomplish this task, the software engineers ran several simulations to
verify the correct behavior of the reengineered system w.r.t. the behaviors of the
original system.
During the second reengineering, only a new iteration of the EMBSE methodology
was required, without any reverse engineering.

The first reengineering was motivated by the need to monitor one more network
condition. Namely, the company wanted to detect which one of the
following conditions holds at any moment:
\begin{itemize} 
  \item \emph{full-band}: the router traffic is exceeding the bandwidth limit;
  \item \emph{not-full-band}: the router traffic is currently below the bandwidth 
        limit.
\end{itemize}
The rationale behind this requirement was the following: in case a network degradation
is detected and, \emph{at the same time}, the network traffic exceeds the bandwidth
limit, the system can automatically recognize the high network load as the potential
cause of the degradation. Otherwise, the staff must investigate in order to
find the cause of the network degradation.

This reengineering introduced the need to \emph{correlate events} locally before 
taking decisions globally. 
Also, while the initial system was designed to sequentially test every link, 
soon the company decided to redesign the application as a concurrent system in
order to minimize input/output waiting times and exploit modern multicore 
architectures.

Table \ref{tab:spec} reports the results of the Design Validation and Verification
phase of the first reengineering for specifications S1--S4.

\begin{figure}[t]
\centerline{
\includegraphics[width=\linewidth]{../figures/1st_reeng.png}
}
\caption{\label{fig:1st_reeng} Overview of the XAL model after first restructuring}
\end{figure}


\begin{table}[b]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Phase & Initial system & $1^{st}$ reeng. & $2^{nd}$ reeng. \\
\hline
Reverse-engineering & - & 1 & - \\
Restructuring & - & 2 & - \\
Requirements analysis & - & - & 0.5 \\
Design & - & - & 1 \\
Design V \& V & - & 1 & 0.5 \\
Reuse & - & 0.5 & 0.5 \\
Coding & - & 1 & 1 \\
Testing & - & 1 & 1 \\
\hline
Total & 12 & 6.5 & 4.5 \\
\hline
\end{tabular}
\caption{\label{tab:times}Development times (in weeks) for the case study}
\end{center}
\end{table}

Tab. \ref{tab:times} reports the time required for the initial development
and the subsequent two reengineering processes.
In Fig. \ref{fig:1st_reeng}, an overview of the manually
reverse-engineered and restructured XAL model is given.
%%Even though for the sake 
%%of space and clarity it is not worth to cover all its details, it still conveys 
%%the complexity of the model. 
For the sake of space and clarity, it is not possible to report all its details.
Nevertheless, the image still conveys the complexity of the
obtained model, composed of 28 states and 114 transitions. Such complexity is due to
the choice of explicitly representing the system correlation states. The XAL model
nonetheless clearly expressed the application correlation logic, and allowed design validation
by means of simulations. The model complexity problem was easily overcome
during the second reengineering. PHP LOCs were reduced by 80\%, as summarized in
Tab. \ref{tab:system_20082012}.
This is not a surprise, since a large part of the code was replaced by the adoption
of the XAL interpreter. This made software maintenance and
system debugging considerably easier.

A second reengineering was also needed because of the following new requirements:
\begin{itemize}
  \item the number of checks to be done before opening a ticket should be
  	parameterized depending on the network type (e.g. LAN, WiFi, \ldots);
  \item when the link is in state (degraded, not-full-band), the router CPU load should also be checked:
  \begin{itemize}
    \item if it is greater than a given threshold (e.g. 20\%), a (degraded, not-full-band, cpu-high) ticket
      should be opened;
    \item otherwise, a (degraded, not-full-band, network-error) ticket should be opened;
  \end{itemize}
  \item the mentioned thresholds (e.g. number of packets lost and CPU load) should
  	be parameterized depending on the network type;
  \item decision procedures should be made more \emph{modular} and the overall
  	correlation more scalable, in case new decision criteria are identified
  	in the future.
\end{itemize}
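The CPU-load decision listed above naturally maps to a small, dedicated XAL automaton. The following SCXML-like fragment is a purely illustrative sketch (element and variable names are hypothetical, not the actual XAL schema); the threshold is a parameter depending on the network type:

```xml
<!-- Hypothetical sketch of the (degraded, not-full-band) decision -->
<state id="DegradedNotFullBand">
  <!-- cpuThreshold is parameterized per network type, e.g. 20% -->
  <transition cond="cpuLoad &gt; cpuThreshold"  target="TicketCpuHigh"/>
  <transition cond="cpuLoad &lt;= cpuThreshold" target="TicketNetworkError"/>
</state>
```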

\begin{figure}
\begin{minipage}{\linewidth}
  \begin{subfigure}{0.45\linewidth}
    \centering
    \includegraphics[width=0.9\linewidth]{../figures/2nd_reeng_tdown.png}
  \end{subfigure}\\[1ex]
  \begin{subfigure}{0.45\linewidth}
    \centering
    \includegraphics[width=0.9\linewidth]{../figures/2nd_reeng_cldeg.png}
  \end{subfigure}
\end{minipage}
\caption{\label{fig:2nd_reeng}Two XAL models after $2^\textit{nd}$ reengineering}
\end{figure}

\fs{if we introduced specific measures that deal with formal methods, refer to them
here to show that they can be applied and are useful to drive the software reengineering activity}
Table \ref{tab:system_20082012} shows some metrics collected
during the reengineering cycles of the ticket controller, while Fig.
\ref{fig:2nd_reeng} gives an idea of how the XAL model in Fig.
\ref{fig:1st_reeng} was
divided and how it looked after the second iteration of the design phase,
which occurred during the second reengineering round. At the end, every XAL model has on
average 7 states and 10 transitions, which makes the models easier to understand,
and the overall system easier to debug and maintain.
In order to validate the reengineered system, the specifications $S1$--$S4$
were verified again (see Tab. \ref{tab:spec} for the outcomes).
To further validate the reengineered ticket controller, the reengineered
system was executed in a parallel environment in order to observe its behaviors
and compare them with those of the previous version of the ticket controller. This was
required because model checking can detect faults in the abstract design
of the system, but not in the actual execution environment (e.g. the correct
configuration of the operating system, or of the network).

\begin{table}[b]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
 & \multicolumn{3}{c|}{1st reengineering} & \multicolumn{3}{c|}{2nd reengineering} \\
\hline
 & Result & CPU Time & Memory & Result & CPU Time & Memory \\
\hline
S1 & yes & & & yes & & \\
S2 & yes & & & yes & & \\
S3 & yes & & & yes & & \\
S4 & yes & & & yes & & \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:spec} Design V. \& V. results for the two reengineerings}\fs{complete}
\end{table}


\begin{table}[b]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
& Initial system & $1^{st}$ reeng. & $2^{nd}$ reeng. \\ 
\hline
\# LOC & 835 & 163 & 284 \\
\hline
\# classes & 1 & 2 & 2 \\
\hline 
\# methods & 8 & 17 & 26 \\
\hline
\# webservices & - & 3 & 3 \\
\hline
\# models & - & 1 & 7 \\
\hline
\# states & - & 28 & 45 \\
\hline
\# transitions & - & 114 & 72 \\
\hline
\end{tabular}
\caption{\label{tab:system_20082012}Statistics for the reengineered ticket controller}
\end{center}
\end{table}

\section{Related work}
\label{sec:rw}

\fs{introduce early the rw of formal methods. reuse some RW from project, especially when discussing what good formal methods can bring to sw eng. mention Hoare's certified compiler project, here or in later sections}
One of the fundamental ideas on which this work is based consists in
minimizing the amount of produced artifacts in order to extend the life
of a software system.
Obviously, this straightforward idea is not new, and it has
been applied in several areas of Software Engineering \cite{Boehm2006}. In particular:

\begin{itemize}
  \item software reuse, via component based and service based software engineering;
  \item model generation, where agile methodologies argue that the amount
  	of generated software models (mostly for documentation) should be minimized or
  	avoided altogether.
\end{itemize}

Nevertheless, such applications have contributed to define software development
methodologies more focused on forward engineering than on software maintenance.

On the other hand, the well-known RUP methodology requires the software engineers to produce a considerable
amount of software models during preliminary analysis phases (e.g. requirements
elicitation, CRC-cards, \ldots), but such artifacts almost never
help to reduce the amount of code generated later on, thus increasing the
cost of maintaining the entire software project, together with its artifacts.
Even Agile methodologies like Extreme Programming or AMDD admit the use of
software models, but almost exclusively for documentation purposes. As a consequence,
the information contained in the original software models must be coded at a high
level of detail, and is therefore partially duplicated. This increases the size of
the software artifacts, which in turn increases the maintenance costs of
the software project.

The EMBSE software development methodology described in Sec. \ref{sec:embse}
is related to several areas of Software Engineering research, and in particular:
\begin{itemize}
  \item Model Driven Development;
  \item Software Reengineering;
  \item Business Process Management;
  \item Service Oriented Architecture;
  \item Distributed Software Development;
  \item Formal Methods.
\end{itemize}

With each one, our work shares some objectives but differs significantly on others.

\emph{Model Driven Development.}
Like Model Driven Development (MDD) we believe software models should play a
central role in order to ensure a higher quality and a more maintainable software.
%
Model Driven Architecture (MDA) is an architecture defined by OMG in order to realize MDD \cite{Kent,Deursen}.
It suggests a set of meta-languages to define domain-specific languages and translation
tools. Our work differs from it because it focuses on how software engineers should work.
Applying EMBSE, the software engineers may even decide to apply tools suggested by MDA.
%
Another MDD approach is Executable UML (xUML) \cite{Mellor2002}.
xUML specifications consist of a \emph{class diagram},
a \emph{statecharts} diagram, and the \emph{state procedures}.
We share with xUML the dichotomy between the executable modeling language and
the invoked activities.
On the other hand, EMBSE does not mandate the use of any modeling language,
while xUML heavily relies on UML. We argue that such a choice should be left to the
development team, since modeling languages are part of the technology available at
the moment and thus subject to obsolescence.
Furthermore, with EMBSE it is possible to employ domain-specific modeling languages,
which is recognized as one of the key advantages of MDD by several authors
\cite{Deursen,Hailpern2006,Selic2003}.
%
Both xUML and MDA focus on the automatic generation of executable code from the
models. This introduces two main issues:
(a) models must be synchronized with the actual executing code, and
(b) while software is generated at the abstract level of the chosen modeling language,
it will be debugged at the concrete level of the generated code.
With EMBSE, on the contrary, the composition of models and concrete actions can be
directly executed without further translations. This avoids the generation of unused
artifacts and, with the help of the runtime environment, allows debugging
each component of the system at the same abstraction level employed during its
design and implementation.
%
An alternative approach is provided by Agile MDD (AMDD) \cite{Practices,Ambler2003}.
AMDD, though, mainly focuses on the integration of modeling technologies into XP software
development.
There, the adoption of models is suggested mostly for documentation
purposes and partial code generation, sharing the weaknesses already underlined for
xUML and MDA.

\begin{comment}
Some
authors, in fact, point out that Model Driven approaches usually don't face the
problem of integrating the code obtained by applied transformation, with already
existing code (sometimes referred as ``foreign code'' \cite{Deursen, France2007}
\pdfcomment{add citations}.
For example, using our methodology, it is possible to orchestrate local
actions with remote services in a ``more natural'' way (they both are thought as generic actions).
In MDD with xUML everything that is not local must be ``wrapped'' in some object,
and this represents a small forcing imposed by the technology, not required by the
problem.
\end{comment}

\emph{Software Reengineering.}
Software Reengineering studies several approaches and aspects relevant to legacy
software modernization. A widely accepted approach divides the process in three-steps:
\emph{reverse-engineering}, \emph{restructuring}, \emph{forward-engineering}
\cite{Yu2005,Brand1997a}.
We argue that, after a legacy software system has been reverse-engineered
and the obtained model has been restructured to fulfill the new requirements,
no code should be generated from the model during the forward-engineering phase.
If new functionalities are needed, they should be defined as concrete actions
invoked by the model. This eliminates the need for future reverse-engineering phases
when the system is reengineered again.

\begin{comment}
that are error prone, if conducted automatically, and tend to introduce ``noise''
into the extracted models.
The latter is especially evident if some concrete examples
from the literature are analyzed. When a finite-state model is (semi-) automatically
extracted from the source code, usually generated state names are not easy to
understand, or added states result to be not informative as too concrete or too abstract
\cite{Choi2010} \pdfcomment{add citations}. While manually labeling the automatically
extracted model can be considered an affordable cost, it seems a waste of time to regenerate
source code from such models, that are likely to be reverse-engineered again.
Our argument is that, whenever possible, it should be preferred to execute the
model directly, avoiding the need to keep it synchronized with the actual code.
Indeed, the model \emph{becomes} part of the actual code.
\end{comment}

\emph{Business Process Management (BPM).}
The BPM research area, as already stated, provided an initial motivation for our work.
For instance, in BPM, reengineering means the adaptation or change of business processes
according to some new or modified needs \cite{Ko2009}.
Nevertheless, EMBSE has been conceived for a set of applications broader than
business process management.
As a matter of fact, this work shows an application that is outside the scope of usual BPM.
%
Furthermore, our approach is more focused on software reengineering.
Indeed, it explicitly takes into account reverse-engineering for legacy applications.
On the other hand, this does not mean that technologies developed for BPM cannot be used in EMBSE.
For example, languages such as BPEL and BPMN (2.0) can be used in EMBSE since they both are executable.

\emph{Service Oriented Architecture (SOA).}
SOAs deal with issues such as service orchestration and choreography, among others.
SOAs require the adoption of specifically designed middleware and protocols to
deploy services. We believe EMBSE can fit the SOA systems development: employing
a proper modeling language, the invoked actions could be external services,
and the overall software system could be deployed as a service.
With respect to some MDD based SOA development methodologies \cite{Leotta2012a},
EMBSE shares the same advantages already mentioned for xUML, MDA and AMDD.
\begin{comment}

\fs{clarify}
EMBSE aims at being more flexible, and the added flexibility
\fs{how to clarify what we mean by "flexibility"?}
make it possible to apply EMBSE also to those IT infrastructures that cannot afford the complete renovation usually
imposed by a full paradigm shift, such as SOA. This kind of reluctancy to in-depth changes of the
IT infrastructure are common, in real-world IT companies, either for technical
reasons or for company policies.
On the other side, we retain the value of
interoperability across organizational boundaries: software systems produced with
our methodology can easily be exported as services as well as invoke already defined
external services. One of the first advantages of our ``mixed'' approach is that
it is easier to mix-up invocations of external services with internal already
defined procedures (or legacy code). In other words, there is no need to wrap
existing business logic in a service, but (at least in principle) it can be
invoked as such. This of course avoids the introduction of new bugs \fs{why?}
and decreases the costs maintaining the (re-)engineered software system
\fs{add references?}
\end{comment}

\emph{Distributed Software Development.}
Software engineering researchers and practitioners are increasingly focusing on how
development methodologies should be adapted to face the issues raised by
distributed development \cite{Nordio,Hawthorne}.
Field experiences underline the importance of good synchronization and
the need for unambiguous communication, especially for software requirements and design decisions.
EMBSE divides the source code into three main artifacts (viz. the models, the actions, and the interfaces between them);
therefore, it can be successfully applied to coordinate work in distributed software projects.
Furthermore, EMBSE can be applied together with the Design-by-Contract paradigm, which
proved to be useful for distributed software development \cite{Nordio}:
action interfaces can be defined as contracts (with preconditions, postconditions, and assertions),
whereas actions can be coded with languages supporting the Design-by-Contract
approach.

\emph{Formal Methods.}
Woodcock et al. collected several applications of formal methods to industrial projects \cite{Woodcock2009a}.
From their survey, two main kinds of software verification via formal methods emerge:
verification of specifications and models (e.g. via model checking, theorem proving, \ldots)
versus in-code verification (e.g. run-time assertions, contracts, \ldots).
They also underline the effort of formalizing fragments of programming languages in
order to extract a semantically precise model of the software to be verified. EMBSE,
on the other hand, requires the use of abstract languages that are already semantically
well-defined. As a consequence, EMBSE suggests the adoption of
formal methods to perform a design validation before any code is written for the actions.

\section{Final considerations}
\label{sec:conclusions}

\fs{below, underline that the integration of software (re)engineering methodology
and model checking helped to achieve a higher confidence in the design, but 
the usual SE testing and quality assurance practices can not be avoided, to deal
with the other aspects of the project}
In this paper we introduced a software development methodology named Executable 
Model Based Software Engineering (EMBSE). It is an attempt to integrate aspects 
from several research areas such as Model Driven Development, Software Reengineering 
and Software Verification. 

The EMBSE methodology is intentionally described in rather general terms, in order to
underline that it could be applied to a wide variety of software projects. Nevertheless,
the successful application of EMBSE to a very specific software system is greatly
influenced by the chosen modeling language and the original application domain.

It should be underlined that it was not in the scope of this paper to describe
any specific software analysis tool to convert the original source code into the chosen
modeling language, in case the Reverse Engineering and Restructuring phases are
needed. The reverse engineering community has provided many solutions over
the years, each with its specific advantages, limitations, and costs.
Only the software engineer can decide whether a manual or an assisted
reverse engineering is preferable, and in the latter case
which tool is more appropriate.
EMBSE is designed to be agnostic about the specific translation tool.\fs{cite papers}

In order to obtain practical feedback on EMBSE, the industrial partner decided to adopt it
for a real industrial case in a pilot project.
At the end of this pilot project, the application had been satisfactorily deployed at several customers
and, therefore, the industrial partner decided to adopt the methodology as its standard development procedure.
In the pilot project, the methodology was applied using XAL as modeling
language, PHP as programming language, and TCTL as formal language.
Nevertheless, the methodology is intended to be language-agnostic,
in order to minimize the obsolescence phenomenon. If the chosen modeling
language permits expressing different execution threads, EMBSE can assist in reengineering
complex software systems towards a distributed design. One of the characteristics
of the EMBSE methodology is that, in principle, multiple programming languages
$\mathcal{PL}_1 \ldots \mathcal{PL}_n$ can be adopted to code different concrete actions. This
feature is particularly useful when planning an incremental migration of the actions
from a given programming language to a different one.

The chosen pilot project was manually reverse-engineered, because the XAL
language does not yet provide a stable model extraction tool. On the
other hand, the project size was reasonably small, and the previous version of the
system had been developed by the same developers, justifying the manual approach.
The industrial partner is currently working on a prototype algorithm to automatically
translate legacy code (e.g. Cobol, \ldots) to XAL diagrams, in order to assist
the task of reverse engineering existing software projects developed by its customers.

A key advantage of EMBSE w.r.t. other common Model Driven Development approaches
derives from the choice not to generate code. In fact, many authors in the MDD
community see the advantage of 4GL software models being compiled into 3GL source code,
in the same way as 3GL code is compiled into executable assembly code \cite{Pastor2008,Deursen,Hailpern2006}.
While we share this vision, from a practical point of view we believe that this
should be achieved by letting the model interact with components written in some
existing 3GL. Even if MDD sees 3GL source code as ``low-level'' code, many procedures
and sequences of operations can naturally be expressed using existing programming
languages. The introduction of the UML Action Language to describe Executable UML actions,
for example, may be a relevant obstacle for IT companies and practitioners:
their employees, in fact, are probably already trained to use general-purpose programming
languages but would have to learn a new, very specific, language such as the UML Action Language.
The main argument for the adoption of the Action Language
is notably that it is abstract and platform-independent. On the other hand,
EMBSE allows the software engineers to develop their projects
using modern 3GLs that are provided with multi-platform compilers
and interpreters.

\begin{comment}
First of all, let assume a traditional software project is composed of a subsystem A written in
assembly language, and a subsystem B written in a some 3GL programming language
that invokes components in A.
In this case, source code in B is required to declare a minimum interface towards
the invoked components defined in A. The compilation of A and B, though, are
separate tasks, and the link and run-time environments are in charge of verifying the
interface between A and B is correctly declared. Also, the software engineer is
not required to compile B (the more abstract code) and then A (the more concrete
code). MDD approaches, on the other side, applies the latter approach, and this
is, in our opinion, the reason why they fail to maintain their promises in terms
of better maintainability and higher quality software.

A second important aspect to consider is that artifacts obtained from 3GL code
compilation are temporary files needed as long as they provide support for the subsequent linking
step that will produce the final executable code. This means that such intermediate
representation don't need any form of maintenance. The MDD approaches, on the other
side, produce 3GL source code that must be maintained with the rest of the project
even though it is suggested not to modify it.


The EMBSE methodology tries to overcome these pitfalls. Avoiding the generation of
any lower level artifact from its models, we claim that EMBSE correctly implements
the desired metaphor of compiling software models as we now compile 3GL source code.
From the technical point of view, defining the interfaces between models and actions
code required by EMBSE requires the same effort as defining the interfaces that
MDA and other MDD approaches employ to let the interaction between its models and
manually crafted source code.
\end{comment}

The EMBSE methodology aims at being a lightweight approach to MDD: the choice of
the adopted modeling and programming languages is left to the software analyst, in order
to make the methodology suitable for importing existing software systems. In principle, components
written in different programming languages can be coordinated by the interpreter
of the chosen modeling language.

Nevertheless, our claim is that the same toolset adopted by MDA and Executable UML
can be integrated within the EMBSE methodology, if desired.


\section{Outlook}
\label{sec:outlook}

The basic idea behind this paper is that software can be made
maintainable-by-construction by developing it as an executable, semantically
well-defined model linked to concrete actions, so that no reverse engineering
is needed in future maintenance cycles.

Some problems remain unsolved and influence the project, most notably the lack
of an automatic translation from XAL models into the Uppaal syntax, which for
the pilot study had to be checked by manual inspection.

Finally, more investigation is needed on measurement, in order to quantify the
maintainability benefits of the proposed methodology.


%\bibliographystyle{IEEEtran}
\bibliographystyle{abbrv}
\bibliography{2015-icst}

%%\balancecolumns

\end{document}
