\documentclass[10pt,conference,compsocconf]{IEEEtran}

\usepackage{comment}
\usepackage{graphicx}
\usepackage{caption}
\usepackage{subcaption}
\usepackage{color}

\newif\ifdraft
\drafttrue
%\draftfalse

\ifdraft
\newcommand{\fs}[1]{{\color{blue} {[FS: #1]}}}
\newcommand{\ab}[1]{{\color{red} {\bf{[AB: #1]}}}}
\else
\newcommand{\fs}[1]{}
\newcommand{\ab}[1]{}
\fi

\newcommand\head[1]{\smallskip\noindent\textbf{#1.}}


\author{
 	\IEEEauthorblockN{Andreas Bollin\IEEEauthorrefmark{1},
        Salvatore Campana\IEEEauthorrefmark{2},
 		Luca Spalazzi\IEEEauthorrefmark{3}, and
 		Francesco Spegni\IEEEauthorrefmark{3}} \\

 	\IEEEauthorblockA{\IEEEauthorrefmark{1}Alpen-Adria University of Klagenfurt,
		Klagenfurt, Austria\\
        andreas.bollin@aau.at} \\
 	\IEEEauthorblockA{\IEEEauthorrefmark{2}Computer VAR ITT,
		Verona, Italy\\
 		s.campana@computervaritt.it} \\
	\IEEEauthorblockA{\IEEEauthorrefmark{3} Universit\`a Politecnica delle Marche,
 		Ancona, Italy\\
 		\{spalazzi,spegni\}@dii.univpm.it }

}

\title{Maintainable-by-Construction: A Case Study in Successfully Integrating Lightweight Formal Methods}

\begin{document}

\maketitle


\begin{abstract}
Nowadays, software engineering faces the challenge of quickly adapting to changes in business needs,
safety/security requirements, and technology. In this situation, maintenance activities are of ever-increasing
importance. In this paper we describe the introduction of a lightweight formal software development methodology,
following the \emph{``maintainable-by-construction''} principle, and its successful application at an industrial
partner. The methodology requires engineers to develop a formal software model using an \emph{executable} and
\emph{semantically well-defined} modeling language. By linking the model to \emph{concrete actions} (which might
be components written in some existing programming language), the resulting software system can be executed directly.
The paper invents neither new notations nor core processes; rather, it aims at showing that, by learning from other fields,
formal methods can beneficially be applied to projects, and as such it tries to help formal methods become more widespread.
\end{abstract}

\section{Motivations}
\label{sec:intro}

Among many others, software engineering principles and practices face the following challenges. A first challenge is \emph{change}, which is nowadays seen as natural. It is dealt with (and promoted effectively) in the agile movement \cite{Highsmith2001}. However, due to the lack of formal evaluation techniques and rigorous planning, agile methods did not manage to substantially supersede the existing techniques -- especially in the domain of safety-critical systems. A combination of the two could bring the best of both worlds (as mentioned by Wolff \cite{Wolff12} or Black et al.~\cite{Black09}).
Another challenge, \emph{distributed} software development, is witnessed by the increasing number of companies that decide to outsource (part of) their IT projects to development teams working in different countries that share neither the same native language nor the same time zone. An even newer trend is known as ``crowdsourcing'', where people over the internet are called
(and often compete) to contribute to a software project \cite{Howe2006,Vukovic2009}.
Finally, there is the need for \emph{certifiable software}, and in the last decades a lot of different strategies
were developed in order to ensure that third-party software is secure with respect to a given set of requirements \cite{Woodcock2009a,Ball2004,Security2011}. Additionally, standardization organizations and big corporations are introducing
formal methods along the software testing phases (see for example the DO-178C standard for avionic software
\cite{RTCA11a,Gigante2012} and the SWEBOK section on Software Engineering Tools and Methods \cite{swebok2004}).

To summarize, there is a need to consider at least the following three issues when developing software systems nowadays \cite{Boehm2010,Meyer}:
\begin{itemize}
  \item A lot of systems are socio-technological and are expected to adapt to requirement changes constantly and immediately;
  \item The software development is distributed (even global);
  \item There is increasing attention on how to certify software quality.
\end{itemize}

Now, these three challenges have an impact on the quality of the final software product
and, furthermore, they also influence the tasks of maintaining and eventually reengineering
a system. In addition, existing development methodologies focus more on the \emph{system
construction} than on its \emph{maintainability}. With this in mind, we think that such
situations bear great potential for combining
existing process models with (lightweight) formal methods, taking maintainability into account.
The reason is that the best way
to ensure better software maintainability is to build software systems by using a development
methodology that considers maintainability from the very beginning. Such a methodology
should aim at minimizing the risk of losing knowledge about the system architecture, it should
maximize the effectiveness of communication among the actors involved in the development process,
and it should clearly describe the software system behavior. We had the chance to take
part in a project with an industrial partner where we tried to do so by introducing a
software development method which we called ``Maintainable-by-Construction''.

Section \ref{sec:mbc} introduces the \emph{Maintainable-by-Construction} (MbC)
principle. A software engineering methodology based on  it is then presented in 
Section \ref{sec:msem}. 
In Section \ref{sec:casestudy}, a real-world application (where we used the MbC-Methodology) 
is described. Then, Section \ref{sec:discussion} presents some lessons learned.
The paper concludes with related work in Section \ref{sec:rw} and some outlook in 
Section \ref{sec:outlook}. 

\section{The Maintainable-by-Construction Principle}
\label{sec:mbc}

\emph{Correct-by-Construction} (CbC) is a design principle requiring that, at every step,
from requirement elicitation down to system deployment, errors are not introduced and, thus,
that the resulting software is correct.
In a similar way, \emph{Maintainable-by-Construction} (MbC) is a design principle
requiring that, at every step, the amount of generated artifacts is minimal and that those artifacts are highly understandable.
The assumption is that this makes software maintainable with reasonable effort, even under the
``extreme'' challenges reported in Section \ref{sec:intro}: rapid business and 
socio-technology changes, distributed development, and the need for software 
quality certification.

Similarly to CbC, MbC suggests following a \emph{top-down} approach to
build software, starting with abstract descriptions. On the other hand, MbC does not rely on
automatic transformations to produce progressively refined models (up
to an executable implementation), which is the core of a CbC design.
The MbC point of view is, on the contrary, to avoid the generation of concrete artifacts as much
as possible (e.g.~the generation of source code).

To make it even more explicit: the underlying principle is that \emph{the fewer artifacts a system needs,
the lower its maintenance costs}, and also the lower the risk of introducing
errors during restructuring operations. As a result, an ideal MbC approach consists of \emph{providing
only the missing pieces of information} that are needed to make the whole system executable. For MbC,
it is of course fundamental to rely on some \emph{modeling language} that concisely describes
the core design of the system, and such a language should be semantically well defined and executable.

This design principle goes hand in hand with a new software development methodology which we called
``MbC Software Engineering Methodology'' (MSEM)\fs{is this ok with the fact that we talk about software (re-)engineering during the paper?}. MSEM is inspired by Model Driven Development, since
the software development team chooses to work using an (a) \emph{executable} and (b) \emph{semantically well-defined}
modeling language (i.e. a 4th Generation Language or 4GL) to express a model of the software system itself.
Then, the model is linked to a set of \emph{concrete actions} (e.g. components developed
using existing programming languages (3GL), remote web services, \ldots).
The link between the model and the actions is established by means of proper \emph{action interfaces}.
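To make the idea concrete, consider the following minimal sketch (not taken from the paper's case study; all names are invented for illustration): an executable model invokes concrete actions only through a declared action interface, so the model stays abstract while the actions carry the implementation details.

```python
# Hypothetical sketch: an executable model linked to concrete actions
# through an action interface. All names are illustrative.
ACTIONS = {}  # action interface: abstract action name -> concrete callable

def action(fn):
    """Register a concrete action (e.g. 3GL code or a web-service stub)."""
    ACTIONS[fn.__name__] = fn
    return fn

@action
def notify_staff(message):
    # A concrete action written in the programming language PL.
    return "sent: " + message

# A tiny "model": an ordered list of abstract steps; the model only
# references actions by name and never contains implementation code.
MODEL = [("notify_staff", "link down")]

def run(model):
    # Executing the model means resolving each step via the interface.
    return [ACTIONS[name](arg) for name, arg in model]

print(run(MODEL))  # ['sent: link down']
```

Replacing a concrete action then requires no change to the model, which is exactly the separation the action interfaces are meant to provide.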

Two (partially overlapping) domains where executable 4GLs have been widely adopted are Business Process Management (BPM)
and Service Oriented Architectures (SOA). Both use business process modeling languages such as BPEL, BPMN, or other workflow languages. The rationale is that, in such domains, the core of the software system is the business process. The
choice of a 4GL is also motivated by the fact that, when the business needs change, the business process manager may decide to
change the overall process and continue to use (in part or in full) the basic activities already used in the previous process.

In our opinion, this principle can be adopted outside the BPM and SOA domains as well. The advantages in terms of software
maintainability stem from the separation of the software system design description from the implementation details of the
required actions. In order to show the applicability of MSEM, we show how it has been adopted to solve a real-world problem
that does not belong to the domain of traditional BPM and SOA problems. But, before introducing the example, the next section introduces the methodology in more detail.


\section{The Maintainable-by-Construction Methodology}
\label{sec:msem}


The maintainable-by-construction software engineering methodology (MSEM) has been inspired by the so-called \emph{simplicity}
principle of the Agile Manifesto, meant as \emph{``the art of maximizing the work not done''} \cite{Highsmith2001}.
At the same time, we want to avoid a common pitfall of agile methodologies,
namely the substantial absence of documentation describing the system
\emph{architectural specification} \cite{Selic2009a}.
\begin{comment}
meant as a \emph{``technology independent description of the higher-level structure
and behavior of the system''} \cite{Selic2009a},
we claim that EMBSE  methodology helps to provide such missing piece of information.
\end{comment}

Since MSEM requires modeling languages with a precise semantics, software
engineers can make use of their advantages, including the application of formal
verification techniques during the software verification phases. Just as an example, by
using MSEM, software architects may decide to express some high-level system
specification in a suitable temporal logic that in turn can be model-checked.
When an in-depth system restructuring is required,
every formal specification could be used as a regression test (to verify again 
after completing the reengineering task).


MSEM is a variant 
of the classic spiral model and it applies the Maintainable-by-Construction design principle as 
introduced in Section \ref{sec:mbc}. It targets software systems that \emph{evolve} and
it supports the notion of \emph{maintainability} as introduced by Godfrey \cite{Godfrey2008}. In this
sense, MSEM considers, right from the beginning, the eventuality of mutable requirements,
and it aims at minimizing the impact of such changes on the system structure.
A key feature of MSEM is that a single modeling language $\cal ML$ is shared
among almost all of its phases.

The MSEM process model is depicted in Figure \ref{fig:MSEM}; it consists of the following 
\emph{steps} and involves the following \emph{roles}:
\begin{itemize}

  \item \emph{Requirement Elicitation}:
  	here, the \emph{requirement engineer} collects the
  	stakeholders' needs, i.e. \emph{what} the system is supposed to do (its functional
  	and non-functional requirements). The input might be interviews with
  	stakeholders and, in case of successive iterations, the previous requirements documentation.
  	The output is an updated version of the requirements documentation.

  \item \emph{Analysis \& Design}:
  	here, the \emph{software engineer} identifies \emph{how} the system should operate.
  	In this phase, the designer takes the \emph{requirements documentation} as input and
  	produces the \emph{system model}.

  \item \emph{Design Validation \& Verification}:
  	in this phase the \emph{software engineer} attempts to detect design errors
  	as early as possible. Since the modeling language $\cal ML$ is required
  	to be semantically well defined, design validation and verification can be
  	done via simulations or through formal reasoning. The choice depends on the
  	application domain and a cost-benefit analysis.
    If formal verification is used, one also has to choose a formal language
    $\cal FL$ in which the specifications will be expressed.
    If the language $\cal ML$ is executable and its models are understood also by
    non-technical stakeholders, the latter can be involved during this phase,
    in order to obtain stronger feedback on the design.
  	The input of this phase consists of the \emph{system model}, the
    \emph{requirements documentation}, and the \emph{formal specifications}.

  \item \emph{Reuse}: in this phase the \emph{software engineer} takes care
  	of identifying which of the \emph{concrete actions} mentioned by the
    $\cal ML$ model are
  	already available and which ones should be implemented from scratch. The output of this
  	phase is the \emph{action interface} document. If a test-driven approach is
  	adopted, during this phase it is possible to produce black-box \emph{unit-test
    cases} for the concrete actions that will be coded in the next step.

  \item \emph{Coding}:
  	here, the \emph{software developers} take as input the \emph{action interfaces}
  	document and the \emph{requirements documentation}, and produce the \emph{concrete
  	actions} written in the programming language $\cal PL$.

  \item \emph{Testing}:
	here, the \emph{software testers} are responsible for testing the system. 
    During this phase \emph{unit-test cases} and \emph{integration-test cases} are 
    produced and executed, together with test cases produced during the 
    \textit{Reuse} stage. 

  \item \emph{Reverse Engineering}:
  	this phase serves as the entry point into MSEM for a legacy project built using
  	a different software development methodology. Here, the \emph{software analyst}
  	takes the original system \emph{source code}, together with its
  	\emph{system documentation} as input. The produced output is a \emph{system model}
  	written in $\cal ML$ together with a set of \emph{concrete actions} written in $\cal PL$.
  	
  \item \emph{Restructuring}:
  	here, the \emph{software analyst} takes as input the \emph{system model} and
  	\emph{concrete actions} produced during the \emph{reverse engineering} phase;
  	his/her task is then to refine them. The outcome is a new set of \emph{system
  	models} and \emph{concrete actions}.

  \item \emph{Quality Assurance}:
    this is the sum of all the activities aiming at managing the quality of the software project
    under analysis. Typically it involves the measurement of several aspects of the artifacts.
    For example, during the \textit{Coding} phase one might apply typical code
    metrics, and during the \textit{Requirement Elicitation} step the number of function
    points and their costs can be analyzed. During \textit{Testing}, the number of
    bugs per line or the code coverage can be reported. There are measures and quality
    indicators for formal specifications \cite{Bollin13} that should be applied if possible,
    and there are also style guides to be followed \cite{Bollin14} to make sure that
    all the formal documents stay comprehensible. \fs{do we have a Coding/Forward Engineering phase, or just coding?}
\end{itemize}
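As a small illustration of how the \emph{Reuse} and \emph{Testing} phases interlock (the example is ours, not taken from the pilot project; all names are hypothetical): a black-box unit-test case can be written against an action interface during \emph{Reuse}, before the concrete action exists, and executed unchanged once \emph{Coding} delivers an implementation.

```python
import unittest

# Hypothetical action interface agreed upon during the Reuse phase:
#   probe_link(address) -> one of "up", "degraded", "down".

def probe_link(address):
    # Stand-in concrete action delivered during Coding; a real one
    # would send ICMP probes to the given address.
    return "up" if address else "down"

class ProbeLinkContract(unittest.TestCase):
    """Black-box test cases written during Reuse, before Coding exists."""

    def test_result_is_a_valid_link_status(self):
        self.assertIn(probe_link("10.0.0.1"), {"up", "degraded", "down"})

if __name__ == "__main__":
    unittest.main()
```

The test constrains only the interface (inputs and admissible outputs), so it survives any later restructuring of the action's internals.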

\begin{figure}[t]
\centerline{
\includegraphics[width=0.9\linewidth]{../figures/methodology-ab.pdf}
}
\caption{\label{fig:MSEM} The MSEM software (re-)engineering cycle}
\end{figure}

By following the methodology, a set of artifacts (see Figure \ref{fig:MSEM-data}) is also produced, namely:
\begin{itemize}
  \item \emph{Requirement documentation}:
  	it can take various forms, depending on the application domain and project
  	needs. Two possible alternatives are a list of precise and concise functional
  	and non-functional system requirements, or a collection of user stories.
%	\cite{nuseibeh2000}.
  	
  \item \emph{System model}:
  	written in the modeling language $\cal ML$, it precisely describes the design skeleton of the
  	system. In this artifact, a description of the system's core components and of how they operate
  	is given at a sufficiently abstract level. The level of abstraction
	depends on the chosen $\cal ML$. Moreover, it should convey information
  	about which \emph{concrete actions} are involved in the system execution,
  	and in what order and under which conditions they should be invoked. If actions
  	take data as input and output, $\cal ML$ should also clearly state how
  	data flows along the execution.
  	
  \item \emph{Action interface}:
  	it is the ``contact point'' between the abstract system model and the concrete actions.
  	In order to be fully executable, the system model needs to invoke existing actions (e.g.
  	remote services or components written in the programming language $\cal PL$,
  	such as methods or functions). This interface is also responsible for declaring
  	the actions' inputs and outputs.
  	
  \item \emph{Concrete actions}:
  	this is the set of all actions written in some existing programming language $\cal PL$.
  	They can be deployed locally or remotely as services. Concrete actions are
  	usually not directly executable applications, but pieces of code that, when
  	plugged into the proper context, can be executed and process the input data
  	to produce some result.

  \item \emph{Formal specifications}:
  	this artifact collects a
  	set of formal specifications written in some formal language $\cal FL$. Such specifications
  	can be expressed using known formalisms, like Z or various temporal logics.
  	In case of system reengineering activities, having formal specifications has a twofold advantage:
  	(a) it drives software engineers to consider whether such
  	assumptions on the system are preserved after the transformation, and
	(b) in case they are preserved, they can be verified again after the reengineering step(s).
  	
  \item \emph{Test cases}:
  	concrete actions can be considered as the building bricks of the overall system
	and, thus, they are expected to accomplish a well defined task. When the system
	model is linked to the concrete actions, the
	behavior of the overall system emerges. While concrete actions can be tested
	through unit-test cases written in the same programming language $\cal PL$,
	integration-test cases could be defined using some ``recording'' mechanism or
  	specialized languages.

  \item \emph{Metrics and reports}:
    this represents the collection of artifacts produced during the activities that
    compose the Quality Assurance step. Typically, such activities are scattered
    among the other steps of the methodology, and produce a report for each such
    step (e.g. test reports, source code metrics, requirement analysis, \dots).
  	In Fig. \ref{fig:MSEM-data}, the big arrow represents the fact that any
    step can contribute to the Quality Assurance artifacts.

  \item \emph{Existing system}:
  	when dealing with reengineering tasks, this is the collection of all the 
    components written (using one or more existing programming languages 
    ${\cal PL}_1$, \ldots, ${\cal PL}_n$), together with any accompanying 
    documentation.
\end{itemize}


\begin{figure}[t]
\centerline{
\includegraphics[width=0.75\linewidth]{../figures/methodology_uml_artifact_2.png}
}
\caption{\label{fig:MSEM-data} Summary of the most important artifacts produced by the Maintainable-by-Construction
Methodology.}
\end{figure}

Fig. \ref{fig:MSEM-data} also depicts the relation between the methodology
phases and the artifacts produced. There, the cogwheels indicate the phases,
while the arrow direction indicates if an artifact should be considered as the input
or as the output of the corresponding phase.

It should be mentioned that software maintenance and reengineering is commonly 
divided into three stages:
\emph{reverse-engineering},
\emph{restructuring}, and
\emph{forward-engineering} \cite{Byrne1992,Chikofsky1990}.
In MSEM, the first two are explicitly named, while the third one is given by the
sequence of the remaining steps in the loop.
%
Both reverse-engineering and restructuring can be conducted manually or assisted by
a toolchain. Tools are available both for modeling languages $\cal ML$ and
programming languages $\cal PL$ (e.g. see \cite{Mens2004} and \cite{Abadi2009}),
and whenever possible they should be used, in order to automate tasks and
check the consistency of transformations. This allows focusing the software
analyst's attention on those maintenance activities and checks that are peculiar
to the specific system under maintenance, and thus usually not automatable.

\begin{comment}
Furthermore, the reverse-engineering phase can be done manually or with the help of tools.
Since the process of software reengineering, by its very nature, must deal with
uncertainty and approximation, the engineers' supervision is fundamental during this phase.
Another aspect that must be taken into account when using MSEM for reengineering
is the choice of the modeling language $\cal ML$.
This choice depends on the available reverse-engineering tools for the original programming
language (e.g.~some use dynamic approaches \cite{Walkinshaw2007}, while others prefer
static analysis methods \cite{Abadi2012}).

Even during \emph{restructuring}, human supervision is fundamental.	Automatic tools,
though, can help to focus the analyst's attention on \emph{points-of-interest}
along the extracted $\cal ML$ models and $\cal PL$ concrete actions. When refactoring tools are
available for languages $\cal ML$ and $\cal PL$ (e.g. see \cite{Mens2004} and \cite{Abadi2009}
respectively), they should be used during this phase.
\end{comment}

\subsection{MSEM for evolving software}

Boehm and Lane classified the following different ways in which a software system may evolve \cite{Boehm2010,Boehm2010b}:
\begin{itemize}

  \item \emph{Single step}:
  	the system is built from beginning to end, as the system
  	has a low probability of changing its functionalities.
  	
  \item \emph{Prespecified sequential}:
  	the system has a low probability of changing its functionalities, but the
  	stakeholders believe that waiting for the full system to be specified would
  	imply a loss of important mission capabilities.
  	
  \item \emph{Evolutionary sequential}:
  	the software development team produces an initial working product. The operational
  	experience provided by the deployed system will help to identify, design, and
  	prioritize the next system requirements.
  	
  \item \emph{Evolutionary overlapped}:
  	this step is comparable to \emph{evolutionary sequential}, but the next increment in the
  	system's functionalities depends on the availability of so-called \emph{increment
  	enablers} (e.g.~a more mature technology, an external system functionality, \ldots).
  	
  \item \emph{Evolutionary concurrent}:
  	in this software evolution model, three teams are involved: \emph{system engineers}
  	identify and specify the next increments, starting from their requirements; \emph{developers}
  	stabilize the current increment; and the \emph{verification and validation team}
  	performs continuous defect detection.
\end{itemize}

With respect to the above classification, for single step and prespecified sequential
software evolution other methodologies should be preferred over MSEM (e.g.~those based on
the Waterfall or V-model).

The MSEM methodology, in our opinion, is more suitable for the three evolutionary
models, since in such contexts it is possible to take advantage of the abstract
representation of the system design.
Two types of changes are thus recognized: those impacting the system design,
and those that change execution details at a more concrete level. While the former
may lead to modifications at the system model level, the latter are likely to cause
a refactoring at the level of the concrete actions. The MSEM methodology makes it
possible to face each type of change at the most suitable abstraction level, and
as a consequence with more specialized tools.

MSEM is particularly suited to the case of evolutionary concurrent software.
The three teams, in fact, can work separately (and with specialized tools) on the
system models and components: the \emph{system engineers} are responsible for collecting new
requirements (or modifying existing ones) and highlighting their implications on the
design expressed in $\cal ML$; the \emph{developers} implement the
concrete actions and fix bugs in them; while the \emph{verification and validation team}
is in charge of discovering weaknesses of the system through formal specifications
written in $\cal FL$, simulating executions of $\cal ML$ models, or unit-testing
actions coded in $\cal PL$.

As already stated, software development is increasingly becoming
a distributed engineering activity \cite{Meyer}. The introduction of
a distinction between abstract aspects of a system and concrete actions
eases the distribution of work and the synchronization among
the contributors. Moreover, the \emph{action interfaces} recognized during the \emph{Reuse} phase
can be used as a contact point between geographically distributed teams
of designers and developers.

Finally, one of the main advantages of MSEM -- over other methodologies -- is
the low cost of migrating a non-MSEM project to an MSEM project.
As a matter of fact, the most appropriate modeling language can be chosen
so that the new software models can be connected to the already developed software
libraries. This reduces the impact of the first software reengineering step (i.e.~starting from
a legacy application).
At this point, any further reengineering can focus on refining the
produced models and concrete actions of the obtained MSEM project. This is
very valuable, since customers of software companies typically ask to minimize
the cost of modernizing and maintaining their existing software systems.


\section{A Pilot Project}
\label{sec:casestudy}

%%\subsection{The problem}
\head{The problem}
\label{sec:cs-intro}
%
Last year, an industrial partner applied the MSEM methodology to solve a network monitoring
problem: given a wide-area network, there is the need to check the working status
of each core switch constituting the network backbone and the so-called last mile.

An application was designed that runs on a \emph{monitoring server}. The application
monitors each network segment between the server and a core switch.
A network segment is called a \emph{link}, and the status of a link can be one of
the following:

\begin{itemize}
  \item \emph{up}: the router is reachable, no IP packet is lost;
  \item \emph{degraded}: the router is reachable, but IP packets are lost;
  \item \emph{down}: the router is not reachable.
\end{itemize}

The application's goal was to open a ticket signaling one of the two faulty
link conditions (\emph{degraded} or \emph{down}) to the technical staff, so that they can solve the problem as soon as possible.
%
A further requirement of the monitoring system was that, before opening a ticket, the
same link status should be tested three times, with a two-minute delay in between.
%
Finally, after opening a ticket, the application should continue to monitor the
link, and in case the link switches back to a non-faulty condition (link up), the
ticket should be automatically closed.
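The ticket-handling rules just described can be summarized as a small state machine. The following is a simplified sketch of ours, not the partner's actual PHP code; the two-minute wait between probes is assumed to be handled by the caller's scheduler.

```python
from enum import Enum

class Link(Enum):
    UP = "up"
    DEGRADED = "degraded"
    DOWN = "down"

class LinkMonitor:
    """Opens a ticket after the same faulty status has been observed
    three consecutive times; closes it when the link is up again.
    Waiting two minutes between observations is the caller's duty."""

    CONFIRMATIONS = 3

    def __init__(self):
        self.ticket = None   # currently open ticket, if any
        self.streak = []     # consecutive observations of one faulty status

    def observe(self, status):
        if status is Link.UP:
            self.streak = []
            if self.ticket is not None:   # link recovered: auto-close
                closed, self.ticket = self.ticket, None
                return ("close", closed)
            return None
        # A faulty observation only counts if it matches the previous ones.
        if self.streak and self.streak[-1] is not status:
            self.streak = []
        self.streak.append(status)
        if len(self.streak) >= self.CONFIRMATIONS and self.ticket is None:
            self.ticket = status.value
            return ("open", self.ticket)
        return None
```

For instance, three consecutive \texttt{DOWN} observations yield \texttt{("open", "down")}, and a subsequent \texttt{UP} observation yields \texttt{("close", "down")}.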

The application was first developed using a different software development methodology.
After that, it had to be reengineered twice.
At this point, the industrial partner decided to use this application as a pilot project
in order to evaluate the MSEM methodology.

This allowed us to test one of the key advantages of the MSEM methodology: namely, the
fact that it aims at minimizing the effort of converting a non-MSEM project into an MSEM one.
After the first reengineering process, the industrial partner continued to apply
the MSEM methodology and they claim that it considerably eased the second
reengineering process.

The application has been installed at 7 Italian telecommunications service providers,
two of which are nation-wide telecommunication companies. The monitored WANs range from a minimum of 4,000
monitored routers to a maximum of 80,000. On the smallest network, 90 tickets
per day are opened, while on the biggest one 6,000 tickets per day are opened. On average,
97\% of those tickets are opened automatically, and through them customers can be
proactively called by the customer care service and notified when network problems happen.

%\subsection{The choice of languages}
\head{The choice of languages}
%
First of all, in order to apply the MSEM methodology, the industrial partner had to choose
which formal specification language ($\cal FL$), modeling language ($\cal ML$) and 
programming language ($\cal PL$) should be used.

Regarding the \emph{formal specification language} $\cal FL$, we decided to adopt
the Uppaal subset of TCTL, a branching-time
temporal logic for timed automata. This was motivated by the fact that
the software system under analysis can naturally be thought of as a distributed
real-time system. Already when designing the first version
of the application, the industrial company started to think of it in terms of Statecharts with time
constraints. The choice of using TCTL to write the specifications, and Uppaal to
model check them, was therefore quite natural.
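As an illustration (the pilot project's actual specifications are not reported here, and the predicate names below are invented), a typical liveness property of the monitor can be phrased in this TCTL subset as the requirement that a confirmed faulty link always eventually leads to an open ticket:
\[
A\Box\, \big( \mathit{linkDown} \implies A\Diamond\, \mathit{ticketOpen} \big)
\]
which corresponds to Uppaal's leads-to query \texttt{linkDown --> ticketOpen}.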

%that XAL models can straightforwardly be translated into Uppaal models.
%This choice joined with the precise semantics of XAL make the formal verification
%of XAL possible by means of model checking.

Regarding the \emph{modeling language}, $\cal ML$ should be the most appropriate
to represent ``what'' the software must do, at the right abstraction level
(see Section \ref{sec:msem}).
Since the project was about redesigning real-time software, and one of the most
used models for such kinds of systems is Timed Automata
\cite{navet2010modeling}, this led the company to adopt an executable modeling
language called XAL, defined as an executable extension of Timed Automata
\cite{...}. XAL encodes its model files as XML documents (according to a proposal
of the W3C \cite{scxml}). The XAL language has a precise and well-defined
operational semantics, therefore it can be used as an executable language.
Finally, pieces of code (in a given programming language $\cal PL$) or even other
automata can be attached to each state (similarly to nested states in UML State
Diagrams). This is used to link the high-level abstract model to concrete actions.
%
Analogously to Timed Automata, each XAL automaton can change its state due to one 
of the following events:

\begin{itemize}
  \item an internal transition,
  \item the passage of time,
  \item the creation of a new running automaton,
  \item the synchronization between two automata.
\end{itemize}
%
Synchronization happens via send-receive rendezvous, during which the
two involved automata can exchange data. Also, when a new automaton is created,
it can receive some data. A detailed description of the XAL semantic rules is given
in \cite{Campana2008, Campana2010}.
%
Finally, the XAL execution environment allows to decide whether each running 
automaton should be executed as a different process or as a different thread, and 
how many threads should be mapped per process. XAL thus is suitable for designing 
concurrent systems.
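To give an intuition of what a XAL model might look like, the following is a purely hypothetical sketch of an automaton with a clock guard, an attached concrete action, and a rendez-vous transition. Element and attribute names are loosely inspired by the SCXML proposal \cite{scxml} and are illustrative assumptions, not actual XAL syntax:

```xml
<!-- Hypothetical XAL-style automaton (illustrative only; not actual
     XAL syntax). It probes a router and hands the result to a
     correlator automaton via a send-receive rendez-vous. -->
<automaton name="LinkProbe" initial="Idle">
  <clock id="x"/>
  <state id="Idle">
    <!-- internal transition with a clock constraint, as in Timed Automata -->
    <transition guard="x &gt;= PROBE_PERIOD" target="Probing"/>
  </state>
  <state id="Probing">
    <!-- a concrete action written in PL (here: the name of a PHP function) -->
    <action name="probeRouter"/>
    <!-- rendez-vous synchronization: data is exchanged with the peer -->
    <transition send="result" channel="correlator" target="Idle" reset="x"/>
  </state>
</automaton>
```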

Regarding the \emph{programming language} $\cal PL$, the industrial partner decided 
to use PHP. This choice was motivated by the opportunity to reuse several 
existing libraries already in use at the company.

XAL development is supported by an Eclipse-based IDE that allows the developers 
to work on XAL models and PHP code uniformly. The generation of Uppaal models was
not yet automated at the time of this pilot study; however, since XAL models are
essentially Timed Automata enriched with executable information, an automatic
translation from XAL to Timed Automata can easily be implemented. For this pilot
study, the correctness of the translation was checked by inspecting the source and
target models.

%\subsection{The initial system}
\head{The initial system}
%
The initial implementation of the application, introduced earlier in this
section, was developed using a different, more agile, software development
methodology. The PHP language and its object-oriented programming paradigm were
used.
The application was divided into two parts: a \emph{system monitor}, responsible
for probing the routers and storing the collected results in a database, and a
\emph{ticket controller}, in charge of reading the produced data, correlating them,
and deciding whether and what kind of ticket should be opened.
Tab.~\ref{tab:system_2004} reports some metrics on the implemented system.

The development of the initial system required nine person-months, involving one software analyst and two software developers.

\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{3}{|c|}{ticket controller} & \multicolumn{3}{|c|}{system monitor} \\
\hline
LOC & Classes & Methods & LOC & Classes & Methods \\
\hline
835 & 1 & 8 & 7531 & 5 & 38 \\
\hline
\end{tabular}
\caption{\label{tab:system_2004}Statistics for the initial software system}
\end{center}
\end{table}

\head{The specifications}
Over the course of the project, four fundamental specifications were identified.
The first is that the system never reaches a deadlock configuration, which can be expressed using the following Uppaal specification:
\[
\textit{AG}\, \neg \textit{deadlock} ~~ (S1)
\]
The second and third require that it must be possible to open tickets for the
\emph{down} and the \emph{degraded} conditions:
\begin{eqnarray*}
\textit{EF}_{\geq 0}\, \textit{TICKET\_DOWN} ~~ (S2) \\
\textit{EF}_{\geq 0}\, \textit{TICKET\_DEGRADED} ~~ (S3)
\end{eqnarray*}
The fourth requires that, for the same link, it is not possible to open
two different types of tickets, expressed in TCTL as follows:
\[
\textit{AG}_{\geq 0}\, \neg (\textit{TICKET\_DOWN} \land \textit{TICKET\_DEGRADED}) ~~ (S4).
\]  
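In Uppaal's concrete query language, these four specifications would read roughly as follows. This is a sketch under the assumption that the ticket states belong to a template instance named, say, \texttt{TicketCtrl}; the actual process and state names depend on the model:

```
// S1: the system never deadlocks
A[] not deadlock
// S2, S3: both ticket types are reachable
E<> TicketCtrl.TICKET_DOWN
E<> TicketCtrl.TICKET_DEGRADED
// S4: never both ticket types for the same link
A[] not (TicketCtrl.TICKET_DOWN and TicketCtrl.TICKET_DEGRADED)
```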

%\subsection{The reengineered system}
\head{The reengineered system}
%
During the two rounds of reengineering, only the \emph{ticket controller} was
redesigned, while the \emph{system monitor} was kept unmodified.
During the first round, the reverse-engineering phase of MSEM was applied manually,
and the restructuring phase was assisted by the XAL IDE.
To accomplish this task, the software engineers ran several simulations to
verify the correct behavior of the reengineered system w.r.t.~the behavior of the
original system.
During the second round, only one new iteration of the MSEM methodology
was required -- without any reverse-engineering step.

The first reengineering round was motivated by the need to introduce one more
condition to monitor in the network. More concretely, the company wanted to measure
which of the following conditions holds at any moment:
\begin{itemize}
  \item \emph{full-band}: the router traffic is exceeding the bandwidth limit;
  \item \emph{not-full-band}: the router traffic is currently below the bandwidth 
        limit.
\end{itemize}
The rationale behind this requirement was the following: in case a network
degradation is detected and, \emph{at the same time}, the network traffic exceeds
the bandwidth limit, the system can automatically recognize the high network load
as the potential cause of the degradation. Otherwise, the staff must investigate in
order to find the cause of the degradation.

This step introduced the need to \emph{correlate events} locally before
taking decisions globally.
Also, while the initial system was designed to sequentially test every link,
the company soon decided to redesign the application as a concurrent system in
order to minimize input/output waiting times and exploit modern multicore
architectures.

Table \ref{tab:spec} reports the results of the Design Validation and Verification 
phase of the first reengineering round for specifications S1--S4.

\begin{figure}[t]
\centerline{
\includegraphics[width=\linewidth]{../figures/1st_reeng.png}
}
\caption{\label{fig:1st_reeng} Overview of the XAL model after the first restructuring -- making the
complexity explicit.}
\end{figure}


\begin{table}[b]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Phase & Initial system & $1^{st}$ reeng. & $2^{nd}$ reeng. \\
\hline
Reverse-engineering & - & 1 & - \\
Restructuring & - & 2 & - \\
Requirements analysis & - & - & 0.5 \\
Design & - & - & 1 \\
Design V \& V & - & 1 & 0.5 \\
Reuse & - & 0.5 & 0.5 \\
Coding & - & 1 & 1 \\
Testing & - & 1 & 1 \\
\hline
Total & 12 & 6.5 & 4.5 \\
\hline
\end{tabular}
\caption{\label{tab:times}Development times (in weeks) for the case study}
\end{center}
\end{table}

In Tab.~\ref{tab:times} the times required for the initial development
and the two subsequent reengineering processes are reported, and
in Fig.~\ref{fig:1st_reeng} an overview of the manually
reverse-engineered and restructured XAL model is given.
%%Even though for the sake
%%of space and clarity it is not worth to cover all its details, it still conveys
%%the complexity of the model.
For the sake of space and clarity, it is not possible to report all its details.
Nevertheless, the image still conveys the complexity of the obtained model,
composed of 28 states and 114 transitions. Such complexity is due to the choice of
explicitly representing the system correlation states. Even so, the XAL model
clearly expressed the application correlation logic, and allowed design validation
by means of simulations. The model complexity problem was easily overcome during
the second reengineering round: PHP LOCs were reduced by 80\%, as summarized in
Tab.~\ref{tab:system_20082012}.
This is not a surprise, since a big part of the code was replaced by the adoption
of the XAL interpreter. This made software maintenance and system debugging
considerably easier.

As mentioned above, a second reengineering round was needed, due to the following new requirements:
\begin{itemize}
  \item the number of checks to be done before opening a ticket should be
  	parameterized depending on the network type (e.g. LAN, WiFi, \ldots);
  \item when the link is in state (degraded, not-full-band), the router CPU load should also be checked:
  \begin{itemize}
    \item if greater than a given threshold (e.g. 20\%), a (degraded, not-full-band, cpu-high) ticket
      should be opened;
    \item otherwise, a (degraded, not-full-band, network-error) ticket should be opened;
  \end{itemize}
  \item the mentioned thresholds (e.g.~number of packets lost and CPU load) should
  	be parameterized depending on the network type;
  \item decision procedures should be made more \emph{modular} and the overall
  	correlation more scalable in case new decision criteria will be identified
  	in the future.
\end{itemize}
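As a concrete illustration, the decision procedure induced by the requirements above can be sketched as follows. This is a minimal sketch in Python rather than PHP; the function name, the data shapes, and the per-network parameter table are illustrative assumptions, not actual code of the ticket controller:

```python
# Illustrative sketch of the second-round ticket decision logic.
# Thresholds and ticket labels follow the requirements above; all
# names and values are assumptions for illustration only.

# per-network-type parameters, as required (e.g. LAN vs. WiFi);
# "checks" is the number of confirmations before opening a ticket
# (not modeled in this sketch)
THRESHOLDS = {
    "LAN":  {"checks": 3, "cpu_load": 0.20},
    "WiFi": {"checks": 5, "cpu_load": 0.20},
}

def decide_ticket(network_type, degraded, full_band, cpu_load):
    """Return the ticket to open for one link, or None."""
    params = THRESHOLDS[network_type]
    if not degraded:
        return None
    if full_band:
        # high network load is the likely cause of the degradation
        return ("degraded", "full-band")
    # (degraded, not-full-band): check the router CPU load
    if cpu_load > params["cpu_load"]:
        return ("degraded", "not-full-band", "cpu-high")
    return ("degraded", "not-full-band", "network-error")
```

For example, a degraded link below the bandwidth limit with a 35\% CPU load yields a \texttt{(degraded, not-full-band, cpu-high)} ticket.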

\begin{figure}
\begin{minipage}{\columnwidth}
  \begin{subfigure}{0.45\linewidth}
    \centering
    \includegraphics[width=0.9\linewidth]{../figures/2nd_reeng_tdown.png}
  \end{subfigure}\\[1ex]
  \begin{subfigure}{0.45\linewidth}
    \centering
    \includegraphics[width=0.9\linewidth]{../figures/2nd_reeng_cldeg.png}
  \end{subfigure}
\end{minipage}
\caption{\label{fig:2nd_reeng}Two XAL models after the $2^\textit{nd}$ reengineering round.}
\end{figure}

\fs{if we introduced specific measures that deal with formal methods, refer to them
here to show that they can be applied and are useful to drive the software reengineering activity}
Table \ref{tab:system_20082012} shows some metrics collected during the
reengineering cycles of the ticket controller, while Fig.~\ref{fig:2nd_reeng}
gives an idea of how the XAL model in Fig.~\ref{fig:1st_reeng} was divided, and how
it looked after the second iteration of the design phase, which occurred during the
second reengineering round. In the end, every XAL model has on average 7 states and
10 transitions, which makes the models easier to understand and the overall system
easier to debug and maintain.
In order to validate the reengineered system, the specifications $S1$--$S4$ 
were verified again (see Tab.~\ref{tab:spec} for the outcomes). 
To further validate the reengineered ticket controller, the reengineered 
system was executed in a parallel environment in order to observe its behaviors
and compare them with those of the previous version of the ticket controller. This
was required because model checking can detect faults in the abstract design 
of the system, but not in the actual execution environment (e.g.~the correct 
configuration of the operating system, or of the network).

\begin{table}[b]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
 & \multicolumn{3}{c|}{1st reengineering} & \multicolumn{3}{c|}{2nd reengineering} \\
\hline
 & Result & CPU Time & Memory & Result & CPU Time & Memory \\
\hline
S1 & yes & & & yes & & \\
S2 & yes & & & yes & & \\
S3 & yes & & & yes & & \\
S4 & yes & & & yes & & \\
\hline
\end{tabular}
\end{center}
\caption{\label{tab:spec} Design V.\ \& V.\ results for the two reengineering rounds}\fs{complete}
\end{table}

\begin{table}[t]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
& Initial system & $1^{st}$ reeng. & $2^{nd}$ reeng. \\
\hline
\# LOC & 835 & 163 & 284 \\
\hline
\# classes & 1 & 2 & 2 \\
\hline
\# methods & 8 & 17 & 26 \\
\hline
\# webservices & - & 3 & 3 \\
\hline
\# models & - & 1 & 7 \\
\hline
\# states & - & 28 & 45 \\
\hline
\# transitions & - & 114 & 72 \\
\hline
\end{tabular}
\caption{\label{tab:system_20082012}Statistics for the reengineered ticket controller}
\end{center}
\end{table}

\section{Related work}
\label{sec:rw}

%\fs{introduce early the rw of formal methods. reuse some RW from project, especially when discussing what good formal methods %can bring to sw eng. mention Hoare's certified compiler project, here or in later sections}

Two fundamental ideas this work (and experiment) is based on are the use of
formal models (and formal specification languages) and the minimization of the
amount of artifacts produced.

Formal methods have a long history and, obviously, the straightforward idea of
minimizing the effort has been applied in several areas of 
Software Engineering \cite{Boehm2006}. In particular, this includes:

\begin{itemize}
  \item software reuse, via component-based and service-based software engineering;
  \item model generation, where agile methodologies argue that the amount
  	of generated software models (mostly for documentation) should be minimized or
  	avoided altogether.
\end{itemize}

Both fields have contributed a lot, but they focus more on forward engineering than 
on software maintenance.
As an example, the well-known RUP methodology requires the software engineers to
produce a considerable amount of software models during preliminary analysis phases
(e.g.~requirements elicitation, CRC cards, \ldots), but in most cases such artifacts
do not help in reducing the amount of code generated later on -- thus increasing the
cost of maintaining the entire software project, together with its artifacts.
Even agile methodologies (like Extreme Programming or AMDD) admit the use of
software models, but almost exclusively for documentation purposes. As a
consequence, the information contained in the original software models must be
coded again at a high level of detail, and is therefore partially duplicated. This
increases the size of the software artifacts, which in turn increases the
maintenance costs of the software project.

The MSEM software development methodology, as described in Sec.~\ref{sec:msem},
is related to several areas of Software Engineering research, but it also differs
from them in several aspects. The areas related to MSEM are:
\begin{itemize}
  \item Formal Methods;
  \item Model Driven Development;
  \item Software Reengineering;
  \item Business Process Management;
  \item Service Oriented Architecture;
  \item Distributed Software Development.
\end{itemize}

\emph{Formal Methods.}
Woodcock et al. collected several applications of formal methods to industrial projects \cite{Woodcock2009a}.
From their survey, two main types of software verification via formal methods emerge:
verification of specifications and models (e.g. via model checking, theorem proving \ldots)
and in-code verification (e.g. run-time assertions, contracts \ldots).
They also underline the effort of formalizing fragments of programming languages in
order to extract a semantically precise model of the software to be verified. MSEM,
by contrast, requires the use of abstract languages that are already semantically
well-defined. As a consequence, MSEM suggests the adoption of formal methods to
perform a design validation before any code is written for the actions.

\emph{Model Driven Development.}
Along with Model Driven Development (MDD), we believe software models should play a
central role in order to ensure higher quality and a more maintainable software.
%
Model Driven Architecture (MDA) is a framework defined by the OMG to realize MDD \cite{Kent,Deursen}.
It suggests a set of meta-languages for defining domain-specific languages and
translation tools. Our work differs from it because it focuses on how software
engineers should work. When applying MSEM, the software engineers may even decide
to adopt tools suggested by MDA.
%
Another MDD approach is Executable UML (xUML) \cite{Mellor2002}.
xUML specifications consist of a \emph{class diagram},
a \emph{statechart} diagram, and the \emph{state procedures}.
With xUML we share the dichotomy between the executable modeling language and
the invoked activities.
On the other side, MSEM does not mandate the use of any particular modeling
language, while xUML heavily relies on UML. We argue that such a choice should be
left to the development team, since modeling languages are part of the technology
available at the moment and thus subject to obsolescence.
Furthermore, with MSEM it is possible to employ domain-specific modeling languages,
which is recognized as one of the key advantages of MDD by several authors
\cite{Deursen,Hailpern2006,Selic2003}.
%
Both xUML and MDA focus on the automatic generation of executable code from the
models. This introduces two main issues:
(a) models must be kept synchronized with the actual executing code, and
(b) while software is generated at the abstract level of the chosen modeling language,
it will be debugged at the concrete level of the generated code.
With MSEM, on the contrary, the composition of models and concrete actions can be
directly executed without further translations. This avoids the generation of unused
artifacts and, with the help of the runtime environment, allows each component of
the system to be debugged at the same abstraction level employed during its design
and implementation.
%
An alternative approach is provided by Agile MDD (AMDD) \cite{Ambler2003}.
AMDD, though, mainly focuses on integrating modeling technologies into XP software
development. It suggests the adoption of models mostly for documentation purposes
and partial code generation, sharing the weaknesses already underlined for
xUML and MDA.

\begin{comment}
Some
authors, in fact, point out that Model Driven approaches usually don't face the
problem of integrating the code obtained by applied transformation, with already
existing code (sometimes referred as ``foreign code'' \cite{Deursen, France2007}
\pdfcomment{add citations}.
For example, using our methodology, it is possible to orchestrate local
actions with remote services in a ``more natural'' way (they both are thought as generic actions).
In MDD with xUML everything that is not local must be ``wrapped'' in some object,
and this represents a small forcing imposed by the technology, not required by the
problem.
\end{comment}

\emph{Software Reengineering.}
Software Reengineering studies several approaches and aspects relevant to legacy
software modernization. A widely accepted approach divides the process into three
steps: \emph{reverse-engineering}, \emph{restructuring}, and
\emph{forward-engineering} \cite{Yu2005}.
We argue that, after a legacy software system has been reverse-engineered
and the obtained model has been restructured to fulfill the new requirements,
no code should be generated from the model during the forward-engineering phase.
If new functionalities are needed, they should be defined as concrete actions
invoked by the model. This eliminates the need for future reverse-engineering phases,
when the system will be reengineered again.

\begin{comment}
that are error prone, if conducted automatically, and tend to introduce ``noise''
into the extracted models.
The latter is especially evident if some concrete examples
from the literature are analyzed. When a finite-state model is (semi-) automatically
extracted from the source code, usually generated state names are not easy to
understand, or added states result to be not informative as too concrete or too abstract
\cite{Choi2010} \pdfcomment{add citations}. While manually labeling the automatically
extracted model can be considered an affordable cost, it seems a waste of time to regenerate
source code from such models, that are likely to be reverse-engineered again.
Our argument is that, whenever possible, it should be preferred to execute the
model directly, avoiding the need to keep it synchronized with the actual code.
Indeed, the model \emph{becomes} part of the actual code.
\end{comment}

\emph{Business Process Management (BPM).}
The BPM research area, as already stated, provided a strong motivation for our work.
For instance, in BPM, reengineering means the adaptation or change of business processes
according to some new or modified needs \cite{Ko2009}.
Nevertheless, MSEM is intended for a broader set of applications than
business process management.
As a matter of fact, this work shows an application that is outside the scope of usual BPM.
%
Furthermore, our approach is more focused on software reengineering.
Indeed, it explicitly takes reverse-engineering for legacy applications into account.
On the other hand, this does not mean that technologies developed for BPM cannot be used in MSEM.
For example, languages such as BPEL and BPMN (2.0) can be used in MSEM since they both are executable.

\emph{Service Oriented Architecture (SOA).}
SOAs deal with issues such as service orchestration and choreography, among others.
SOAs require the adoption of specifically designed middleware and protocols to
deploy services. We believe MSEM can fit SOA systems development: employing
a proper modeling language, the invoked actions could be external services,
and the overall software system could be deployed as a service.
With respect to some MDD-based SOA development methodologies \cite{Leotta2012a},
MSEM shares the same advantages already mentioned for xUML, MDA, and AMDD.

\begin{comment}
\fs{clarify}
MSEM aims at being more flexible, and the added flexibility
\fs{how to clarify what we mean by "flexibility"?}
make it possible to apply MSEM also to those IT infrastructures that cannot afford the complete renovation usually
imposed by a full paradigm shift, such as SOA. This kind of reluctancy to in-depth changes of the
IT infrastructure are common, in real-world IT companies, either for technical
reasons or for company policies.
On the other side, we retain the value of
interoperability across organizational boundaries: software systems produced with
our methodology can easily be exported as services as well as invoke already defined
external services. One of the first advantages of our ``mixed'' approach is that
it is easier to mix-up invocations of external services with internal already
defined procedures (or legacy code). In other words, there is no need to wrap
existing business logic in a service, but (at least in principle) it can be
invoked as such. This of course avoids the introduction of new bugs \fs{why?}
and decreases the costs maintaining the (re-)engineered software system
\fs{add references?}
\end{comment}

\emph{Distributed Software Development.}
Software engineering researchers and practitioners are increasingly focusing on how
development methodologies should be adapted to face the issues raised by
distributed development \cite{Nordio,Hawthorne}.
Field experiences underline the importance of good synchronization and
the need for unambiguous communication, especially for software requirements and design decisions.
MSEM divides the source code into three main artifacts (the models, the actions, and the interfaces between them);
therefore, it can be successfully applied to coordinate work in distributed software projects.
Furthermore, MSEM can be applied together with the Design-by-Contract paradigm,
which has proved to be useful for distributed software development \cite{Nordio}:
action interfaces can be defined as contracts (with preconditions, postconditions,
and assertions), whereas actions can be coded in languages supporting the
Design-by-Contract approach.
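As an illustration, an action interface expressed as a contract might look as follows. This is a minimal Python sketch using plain assertions; the function name, ticket kinds, and data shape are hypothetical, not part of the actual system:

```python
# Hypothetical sketch of an MSEM action interface expressed as a
# contract: preconditions and postconditions checked with assertions.
# All names and the ticket representation are illustrative assumptions.

def open_ticket(link_id: str, kind: str) -> dict:
    """Action invoked by the model; its interface doubles as a contract."""
    # preconditions: the model must pass a well-formed request
    assert link_id, "precondition: link_id must be non-empty"
    assert kind in ("DOWN", "DEGRADED"), "precondition: unknown ticket kind"

    ticket = {"link": link_id, "kind": kind, "status": "open"}

    # postcondition: the created ticket refers to the requested link
    assert ticket["link"] == link_id and ticket["status"] == "open"
    return ticket
```

A violated precondition (e.g.~an empty \texttt{link\_id}) is then detected at the model--action boundary, rather than deep inside the action code.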


\section{Discussion}
\label{sec:discussion}

In this paper we introduced a software development methodology named the
Maintainable-by-Construction Software Engineering Methodology (MSEM). It is an
attempt to integrate aspects from several research areas such as 
Model Driven Development, Software Reengineering, and Software Verification.
The methodology itself is intentionally described in rather general terms so that it can be applied to a 
wide variety of software projects. Nevertheless, the successful application of MSEM to a very specific 
software system is greatly influenced by the chosen modeling language and the original application domain.

It should be mentioned that the description of any specific software analysis
tool (to convert the original source code to the chosen modeling language) 
was not in the scope of this paper. The reverse engineering community has provided
many solutions over the years, each of them with its specific advantages,
limitations, and costs. It is up to the software engineer to decide what is best to
use: MSEM is designed to be agnostic about the specific translation tool.%\fs{cite papers}

\fs{why "one of our industrial partner"? we only have one...}
In order to see whether MSEM works or not, one of our industrial partners decided to adopt it
for a real industrial case in a pilot project.
At the end of this pilot project, the application had been successfully installed by several customers
and, therefore, the industrial partner decided to adopt the methodology as its standard development procedure.
In the pilot project, the methodology was applied using XAL as the modeling
language, PHP as the programming language, and TCTL as the formal language.
Nevertheless, the methodology is agnostic regarding the selection of languages. If 
the chosen modeling language permits expressing different execution threads, MSEM
can assist in reengineering complex software systems towards a distributed design.
One of the characteristics of the MSEM methodology is that, in principle, multiple
programming languages $\mathcal{PL}_1 \ldots \mathcal{PL}_n$ can be adopted to code
different concrete actions. This feature is particularly useful when planning an
incremental migration of the actions from one programming language to another.

One drawback, however, is that the chosen pilot project was reverse-engineered
manually. The reason is that the XAL language does not yet provide a stable model
extraction tool. On the other hand, the project size was reasonably small, and the
previous version of the system had been realized by the same developers, which
justified the manual approach.
The industrial partner is currently working on a prototype algorithm to automatically
translate legacy code (e.g.~Cobol, \ldots) into XAL diagrams, in order to assist
the task of reverse engineering existing software projects developed by its customers.

A key advantage of MSEM w.r.t.~other common Model Driven Development approaches
derives from the choice not to generate code. In fact, many MDD authors
see the advantage of 4GL software models being compiled into 3GL source code
(in the same way as 3GLs are compiled into executable assembly code \cite{Pastor2008,Deursen,Hailpern2006}).
While we share this vision, from a practical point of view we believe that this
should be achieved by letting the model interact with components written in some
existing 3GL. Even if MDD sees 3GL source code as ``low-level'' code, many procedures
and sequences of operations can naturally be expressed using existing programming
languages. The introduction of the UML Action Language to describe Executable UML actions,
for example, may be a significant obstacle for IT companies and practitioners:
their employees are probably already trained to use general programming
languages, but would have to learn a new, very specific language such as the UML
Action Language. The main argument for the adoption of an Action Language
is notably that it is abstract and platform independent. On the other side,
MSEM allows the software engineers to develop their projects
using modern 3GLs that are provided with multi-platform compilers
and interpreters.

\begin{comment}
First of all, let assume a traditional software project is composed of a subsystem A written in
assembly language, and a subsystem B written in a some 3GL programming language
that invokes components in A.
In this case, source code in B is required to declare a minimum interface towards
the invoked components defined in A. The compilation of A and B, though, are
separate tasks, and the link and run-time environments are in charge of verifying the
interface between A and B is correctly declared. Also, the software engineer is
not required to compile B (the more abstract code) and then A (the more concrete
code). MDD approaches, on the other side, applies the latter approach, and this
is, in our opinion, the reason why they fail to maintain their promises in terms
of better maintainability and higher quality software.

A second important aspect to consider is that artifacts obtained from 3GL code
compilation are temporary files needed as long as they provide support for the subsequent linking
step that will produce the final executable code. This means that such intermediate
representation don't need any form of maintenance. The MDD approaches, on the other
side, produce 3GL source code that must be maintained with the rest of the project
even though it is suggested not to modify it.


The MSEM methodology tries to overcome these pitfalls. Avoiding the generation of
any lower level artifact from its models, we claim that MSEM correctly implements
the desired metaphor of compiling software models as we now compile 3GL source code.
From the technical point of view, defining the interfaces between models and actions
code required by MSEM requires the same effort as defining the interfaces that
MDA and other MDD approaches employ to let the interaction between its models and
manually crafted source code.
\end{comment}

The MSEM methodology aims at being a lightweight approach to MDD: the choice of the
adopted modeling and programming languages is delegated to the software analyst in
order to make it suitable to their needs. In principle, even components
written in different programming languages can be coordinated by the interpreter
of the chosen modeling language. If desired, the toolsets adopted by MDA and Executable UML
can be integrated within the MSEM methodology without problems.


\section{Outlook}
\label{sec:outlook}

Analyzing the experiences collected during this project, the industrial partner
found the introduction of the model checker particularly useful, because it helped
to detect some design mistakes earlier. The industrial partner could then fix them
already during the Design Validation \& Verification step. Without the model
checker, such errors -- which had been overlooked by the designer -- would have
been discovered only in the testing phase, or later, increasing the cost of fixing
them.

One limitation that we had to face during the verification phase was the 
introduction of concrete parameters in order to make the verification task feasible
(e.g.~we considered scenarios with a fixed number of processes, or cycles that can
be taken a bounded number of times). An open question is whether the same properties
can be checked with respect to networks with any (finite) number of processes,
or with respect to larger values of the model parameters. These problems are
called, respectively, parameterized model checking and parametric model checking.

A final issue that is still open is the integration of suitable 
measures into the methodology. We plan to take a closer look 
at quality and complexity measures and, finally, to come up with 
some recommendations for future projects.

%\bibliographystyle{IEEEtran}
\bibliographystyle{abbrv}
\bibliography{2015-icst}

%%\balancecolumns

\end{document}
