\documentclass{acm_proc_article-sp}
\usepackage[utf8]{inputenc}

\begin{document}

\title{ESEML - Empirical Software Engineering Modelling Language}

\author{
\alignauthor
Bruno Cartaxo, Ítalo Costa, André Santos, Sérgio Soares\\
\affaddr{Center of Informatics - Federal University of Pernambuco}\\
\affaddr{Av. Professor Luís Freire, s/n, Cidade Universitária}\\
\affaddr{CEP 50740-540, Recife/PE/Brazil}\\
\email{bfsc,imac,scbs@cin.ufpe.br}
}

\maketitle
\begin{abstract}
Empirical studies are essential to establish whether proposed software
engineering techniques are effective and in which contexts they apply, yet
controlled experiments remain rare and demand a wide range of skills. This
paper presents ESEML, a visual domain-specific language (DSL) for modeling
controlled experiments in software engineering. Built as a DSL Workbench,
ESEML lets researchers instantiate a domain model of an experiment and
automatically generate the experiment plan from it, easing communication
among stakeholders and paving the way for a broader platform for conducting
empirical studies.
\end{abstract}

\category{H.4}{Information Systems Applications}{Miscellaneous}
\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance
measures]

\terms{Theory}

\keywords{empirical software engineering, controlled experiments, domain-specific languages}

\section{Introduction}
Research in software engineering normally proposes new processes, standards,
structures, tools, languages, or practices for software development,
typically in order to increase the productivity and quality of the products
and services delivered. However, technology transfer is hard to achieve, and
empirical studies help move technology from academia to industry
\cite{Wohlin:2000:ESE:330775}.

A great part of the research proposing new methods fails to deliver an
empirical evaluation. Only through such evaluations is it possible to
establish whether a proposed technique is efficient and effective, and in
which contexts it can be applied \cite{BookESE, Wohlin:2000:ESE:330775,
Sjoberg:2005:SCE:1092717.1092851, Sjoberg:2007:FEM:1253532.1254730},
therefore easing technology transfer. According to Sjøberg et al.
\cite{Sjoberg:2005:SCE:1092717.1092851}, among 5,453 scientific articles
published in 12 of the main journals of software engineering between 1993 and
2002, only 1.9\% involved a controlled experiment.

Any kind of empirical evaluation requires appropriate empirical methods and
techniques, which can be adapted to the context of software engineering.
Empirical Software Engineering includes several types of studies, such as
surveys \cite{REF}, case studies \cite{REF}, secondary studies (mappings and
systematic reviews) \cite{systematic-review-guidelines-2007}, and controlled
experiments \cite{Wohlin:2000:ESE:330775, systematic-review-guidelines-2007,
Sjoberg:2007:FEM:1253532.1254730}.

Each of these studies has its own characteristics and should be used in
specific contexts. In this paper we focus on controlled experiments, the most
controlled technique, commonly used within a very specific context
\cite{Wohlin:2000:ESE:330775, Sjoberg:2007:FEM:1253532.1254730}. Such control
and specificity tend to compromise the generalization of an experiment's
results. On the other hand, experiments are an excellent technique to deeply
validate the effects of each small change in the observed environment without
requiring real-world conditions, which are often difficult to obtain in
academic research \cite{BookESE, Wohlin:2000:ESE:330775}. Thus, a controlled
experiment can be a good preliminary indicator of whether a technique,
method, or process in software engineering does what it claims to do.
Additionally, it creates a setting for more applied and less controlled
empirical research, such as surveys and case studies \cite{BookESE}.

Conducting a controlled experiment requires bringing together a wide range of
skills, which often creates a barrier to adopting the technique. It demands
knowledge of the subject under evaluation (software engineering in this
case), familiarity with the technical terminology, statistics, and expertise
in experimental design. Above all, the controlled experiment itself is a very
particular knowledge domain.

In order to delimit, define, and accelerate the development of solutions in a
specific field, the concept of domain-specific languages (DSLs) was created.
These languages tend to be very expressive and natural, even for people
without previous programming knowledge, and are widely used to express
problems within a specific domain in a natural and fluent way.

Since DSLs are a good alternative for modeling solutions in a specific
domain, and controlled experiments have their own domain vocabulary, a DSL is
well suited to the problem of modeling and conducting controlled experiments
in software engineering. It mitigates social barriers between the
stakeholders in the field being validated, the statisticians, the experiment
designers, and the domain experts.

For all these reasons, this paper presents the definition and development of
a visual DSL for modeling controlled experiments in software engineering. In
this case, we generate the experiment plan from an instantiation of our
domain model. This DSL is the kickoff of a major research initiative: the
development of a platform for conducting empirical studies in software
engineering, since such platforms and computer-aided systems are much-needed
tools to increase the volume and quality of empirical studies in software
engineering \cite{4492892}.

\section{Related Work}
Initially, an informal review of the literature looking for similar studies
was carried out. No studies involving the definition of a DSL and a Language
Workbench for modeling empirical studies in software engineering were found.
On the other hand, it was possible to find some tools focused on supporting
the conduction of empirical studies in software engineering. Torii et al.
\cite{799942} presented a Computer-Aided Empirical Software Engineering
(CAESE) framework and Ginger2, a partial implementation of this framework
that aims to support all phases of an empirical study in software
engineering. Punter et al. \cite{1237967} show the advantages of online
surveys conducted with web survey management systems such as Globalpark
iSurvey or eSurvey. Bandara et al. \cite{quteprints42184} propose an overall
approach to conducting systematic literature reviews in the context of
information systems using NVIVO, a qualitative data management tool, and
ENDNOTE, a personal reference database.

\section{Empirical Software Engineering and Controlled Experiments}
The use of the scientific method involves confronting theories and techniques
with reality in order to verify whether they are valid enough to be taken
forward. A major problem in the current scenario is that software engineering
has used time as a parameter for the validity of its theories, to the
detriment of confrontation with reality through experimentation
\cite{BookESE}.

Since it is critical to use empirical methods to evaluate theories in
software engineering, it is also necessary to master the technologies for
conducting the studies, as well as the expertise to overcome the problems
inherent to them. Some of the impediments to the systematization of
empiricism in software engineering are: lack of familiarity with the
scientific method, lack of experience in analyzing data, and lack of
understanding of how to deal with the burden of human factors in software
development.

In order to remove, or at least reduce, these obstacles, some actions are
necessary. If software developers are not familiar with the scientific
method, it is worth presenting them with positive results from other
disciplines such as engineering and medicine. This would be a way of showing
that the cycle of confronting hypotheses with reality is of great value for a
better understanding of software construction. If developers lack the
experience to analyze data, one can show them that the necessary statistical
and mathematical foundation is part of their own education as engineers. If
experiment examples are more common in other areas such as agronomy and
medicine, a good solution is to increase the volume of empirical studies in
software engineering to create a body of knowledge and appropriate
terminology. Finally, if human factors are strong confounders in software
development, we can draw on knowledge from the human sciences when conducting
empirical studies in order to control and minimize such biases.

Once the obstacles are removed or controlled, we are ready to conduct an
empirical study. The next step is to select which type of study is best
suited to our situation, so it is important to know the categories of
empirical studies. There are two classifications for empirical studies:
according to the nature of the data, and according to the conduction
strategy. Concerning the nature of the data, there are qualitative and
quantitative studies. Concerning the conduction strategy, there are surveys,
case studies, secondary studies, and controlled experiments.

Quantitative studies investigate the relationship between the numerical
variables being examined. Qualitative research, on the other hand, tries to
understand the objects in their natural state without having to establish
numerical relationships \cite{BookESE}.

In addition to classifying studies by the nature of the data, we can classify
them by conduction strategy. Surveys, for example, are investigations made in
retrospect, typically through interviews and questionnaires, when some method
or tool is already in use, seeking to interpret the results to generate
descriptive and explanatory conclusions. Case studies, on the other hand, are
widely used to monitor projects and activities during their execution,
collecting data and performing statistical analysis without much control over
the observed environment. Secondary and tertiary studies, such as systematic
reviews and mapping studies, gather all the empirical research done in a
particular area of knowledge, aiming to organize it and draw nontrivial
conclusions about the subject in question
\cite{systematic-review-guidelines-2007}. Finally, controlled experiments are
usually careful examinations made in a laboratory with a high degree of
control. These studies manipulate variables and observe their effects,
performing statistical analysis to draw conclusions about the impacts of the
variables in their contexts \cite{Wohlin:2000:ESE:330775}.

The experimentation process is carried out at different levels by various
groups within the software engineering community, which means that each group
must take on a specific responsibility in verifying knowledge. The first link
in the chain is the researcher, who proposes theories, methods, tools, and
techniques to address problems. This level of testing is most commonly
performed in laboratories, in highly controlled environments, in contrast to
studies in the real world. However, this first level of research is extremely
necessary to answer preliminary questions such as: Does the methodology have
some effect on team productivity? Does programming paradigm X have an impact
on the readability and maintainability of the code? These are some of the
recurring questions in software engineering \cite{BookESE}.

In order to obtain reliable results when conducting a controlled experiment,
it is important to follow a well-defined, systematic process. By process we
mean a guide that supports an activity from its very beginning to its end.
Within this perspective, the process of conducting a controlled experiment
comprises the following phases: definition, planning, operation, analysis,
and presentation \cite{Wohlin:2000:ESE:330775}.

During the definition phase, the researcher is concerned with the experiment
setup in terms of problems and objectives. Then, in the planning phase, the
researcher must specify the design of the experiment itself in terms of
variables, treatments, and threats to validity. Later, during the operation
phase, the experiment is executed according to the plan and the resulting
data are collected. With the data in hand, the next step is an exploratory
analysis, followed by statistical analysis and hypothesis tests. Finally, the
researcher must present the analysis and the conclusions of the experiment
\cite{Wohlin:2000:ESE:330775}.

In addition, the researcher must know the terminology inherent to the field
of experimentation. With these terms in mind, the researcher will be able to
map each concept to the experiment to be performed. Some key concepts are:
experimental unit, experiment participants, response variables, parameters,
factors, levels, blocking variables, and validity
\cite{Wohlin:2000:ESE:330775}.

The experimental unit, or experimental object, is the entity that ``suffers''
the execution of the experiment, and it can only be well defined in
accordance with the objectives of the experiment. Patients are typically the
units of a medical experiment; a software project or a phase of a development
process can be the experimental unit of a software engineering experiment.

Participants are the individuals who apply, to the experimental unit, the
techniques and methods being tested. In some branches of knowledge, the
participant exercises little or no influence on the outcome of the
experiment. In software engineering, however, participants are known to be
key players, influencing the results of the experiment both positively and
negatively.

Response variables, or dependent variables, are typically the outputs (or
results) of an experiment. In controlled experiments, these variables are
typically quantitative. The execution time of an algorithm may be a response
variable in an experiment that evaluates the performance of multiple
implementations of that algorithm.

Parameters are characteristics fixed at a given value, so they do not vary
throughout the experiment's execution. To compare the quality of code
implementing a problem with and without design patterns, we could set the
programming language as a parameter, ensuring that any improvement in
execution time is inherent to the use of the pattern; the experiment would
then be valid within the scope of the language set as a parameter.

Factors are features that intentionally vary between executions of an
experiment; understanding their effects on the response variables is exactly
the goal of an experiment. Levels (or alternatives) are the values that a
factor can assume. If an experiment is conducted to assess which language is
more efficient for solving a given problem, one factor could be the
programming language, and its levels could be Java and C++.

Blocking variables are undesired variations that occur during the experiment
and thus cannot be fixed as parameters. Blocking variables influence the
response variables and in many cases invalidate experiments. An experiment
focused on assessing the quality of code written in some particular language
can be strongly influenced by the experience of the developer; therefore,
experience may be a candidate blocking variable \cite{BookESE}.
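To make the preceding concepts concrete, the sketch below expresses them as a
small data model. It is purely illustrative: ESEML itself is built with
Microsoft DSL Tools, and every name here is our own invention for this
example.

```python
from dataclasses import dataclass, field

@dataclass
class Factor:
    name: str    # characteristic varied intentionally between executions
    levels: list # alternatives the factor can assume

@dataclass
class Experiment:
    parameters: dict          # characteristics fixed at a given value
    factors: list             # what varies between executions
    response_variables: list  # measured outputs (dependent variables)
    blocking_variables: list = field(default_factory=list)

# The design-patterns example from the text: the language is fixed as a
# parameter, and the use of patterns is the factor under study.
exp = Experiment(
    parameters={"language": "Java"},
    factors=[Factor("design_patterns", ["with", "without"])],
    response_variables=["execution_time"],
    blocking_variables=["developer_experience"],
)
```

Note how the blocking variable (developer experience) is recorded separately
from the parameters: it cannot be fixed, only controlled for.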

Another key concept in the scope of experiments is the validity of their
results. Misguided planning, execution, or data collection may completely
invalidate the results of an experiment. For that reason, it is important to
ensure that all aspects that guarantee the validity of an experiment are
handled correctly. It is common to define four types of validity: conclusion,
internal, construct, and external \cite{Wohlin:2000:ESE:330775}.

When the basic concepts and the experimentation process are known, we are
able to conduct an experiment. After conducting a controlled experiment, it
is very important to replicate it. Replicating an experiment means running it
again, preferably by another group of researchers, to confirm the validity of
the results and to correct possible mistakes in planning, analysis, and
conclusions.

\section{Domain-Specific Language}
According to Fowler \cite{Fowler2010}, a domain-specific language is a
computer programming language and, like any other language, is a way of
manipulating abstractions. Domain-specific languages are what their name
says: languages of limited expressiveness focused on a particular domain.

DSLs are characterized by four key elements. Like any other computer
programming language, a DSL should run on a computer and serve a purpose, and
it should have a sense of fluency, where expressiveness comes not just from
individual expressions but also from the way they can be composed together.
The remaining two elements, limited expressiveness and domain focus, are
discussed next.

A general-purpose programming language provides many capabilities, supporting
varied data, control, and abstraction structures. All of this is useful but
makes it harder to learn and use. A DSL, on the other hand, supports a bare
minimum of features needed for its domain. You cannot build an entire
software system in a DSL; rather, you use a DSL for one particular aspect of
a system.

Martin Fowler states that a limited language is only useful if it has a clear
focus on a small domain. The domain focus is what makes a limited language
worthwhile.

There are three different kinds of DSLs: internal DSLs, external DSLs, and
DSL workbenches. A DSL embedded in a general-purpose programming language is
an internal DSL (e.g., regular expressions). A DSL that runs separately from
the language of the application it works with is an external DSL (e.g., XML);
external DSLs are usually parsed by code in the host application using
text-parsing techniques. When you have a specialized IDE for defining and
building DSLs, used not just to determine the structure of a DSL but also as
a custom editing environment for people to write DSL scripts, you have a DSL
workbench.
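As an illustration of the internal kind, the sketch below embeds a small
fluent interface for describing an experiment in Python, the host
general-purpose language. The builder and its method names are hypothetical,
not part of ESEML or any real experimentation tool.

```python
class ExperimentBuilder:
    """Hypothetical internal DSL for sketching an experiment design."""

    def __init__(self, title):
        self.title = title
        self.spec = {}

    def factor(self, name, *levels):
        self.spec.setdefault("factors", {})[name] = list(levels)
        return self  # returning self is what gives the DSL its fluency

    def measure(self, variable):
        self.spec.setdefault("responses", []).append(variable)
        return self

# Expressiveness comes from composing the calls, not from any single one.
plan = (ExperimentBuilder("Language efficiency")
        .factor("language", "Java", "C++")
        .measure("execution_time"))
```

The chained calls read close to the domain vocabulary while remaining
ordinary Python, which is precisely the trade-off of an internal DSL.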

\subsection{DSL Lifecycle}

Martin Fowler states that a common alternative is to define the DSL first. He
mentions that you should begin by defining some scenarios and the way you
would like the DSL to look. Additionally, he emphasizes the presence of a
domain expert during the construction of the language: ``This is a good first
step to using the DSL as a communication medium''.

When you sit down with people who understand the customers' needs, you come
up with a set of controller behaviors, either based on what people wanted in
the past or on something you think they will desire. That is the input you
need to create a way to write it in a DSL form.

For every iteration in this workflow, you modify the DSL to support new
capabilities. By the end of the exercise, you will have worked through a
reasonable sample of cases and will have a pseudo-DSL description of each of
them. Once you have a representative set of pseudo-DSLs, i.e., a
representative set of features for a given domain, you can start implementing
them.

\subsection{DSL Workbench}

For the present study, we constructed a DSL Workbench to boost productivity
in the experiment execution, minimize flaws in the experiment modeling, and
mitigate the social barriers to understanding and sharing knowledge among the
stakeholders of controlled experiments. The first two goals can be achieved
by systematizing the process of controlled experiments; the third goal, which
is our focus, can be achieved by a DSL Workbench.

According to Martin Fowler, a DSL workbench is an environment designed to
help people create new DSLs, together with the high-quality tooling required
to use those DSLs effectively. A workbench can also provide a visual
representation of the model; this visualization is similar to the DSL itself
in that it allows a human to understand the model. The visualization differs
from the source in that it is not editable but, on the other hand, it can do
something an editable form cannot, such as render diagrams.

Communication with customers and users is the most common source of project
failure in software development. By providing a clear yet precise language to
deal with domains, a DSL can help improve this communication. Additionally,
Martin Fowler states: ``I do think DSLs can improve communication. It's not
that domain experts will write the DSLs themselves; but they can read them
and thus understand what the system thinks it's doing. By being able to read
DSL code, domain experts can spot mistakes. The biggest gain from using a DSL
in this way comes when domain experts start reading it. Involving domain
experts in a DSL is very similar to involving domain experts in building a
model. I've often found great benefit by building a model together with
domain experts; constructing a Ubiquitous Language \cite{evans-ddd} deepens
the communication between software developers and domain experts''.

\section{ESEML}
To develop the DSL for the present study, our work started from the lifecycle
defined by Martin Fowler for creating domain-specific languages. Instead of
starting with the domain-specific language itself, we first carried out a
preliminary review of models, ontologies, and other formal representations of
controlled experiments. Our objective was to provide a rationale of concepts
and their relations for controlled experiments: we would define our domain
model first and then proceed with the DSL itself.

Based on the concepts, entities, and relations raised by relevant studies, we
arrived at a starting point for our model. The model presented (see part of
the domain model below) is a set of concepts critical to the planning phase
of a controlled experiment. Our starting point for the implementation of the
DSL was a case of creating, through code generation, the experiment plan.

We chose DSL Tools from Microsoft as the tool to create the DSL. To create
your own DSL with DSL Tools, you need to define your domain model first; that
is why we had to proceed with a preliminary definition of the model and
implement the whole DSL later. After the conception of the model, the next
step was to define the visual representation of the DSL (see an example of
the visual representation below). Just like the lifecycle mentioned by Martin
Fowler, a couple of interviews with experienced researchers and further
validation of the proposed visual representation were carried out. Our
objective was to follow the first step mentioned by Martin Fowler: to start
using the DSL as a communication medium with the stakeholders.

These interviews led to the conception of new features and ideas for the DSL.
Nevertheless, we chose not to add new features, as they exceeded our time and
scope limitations, and stuck with our preliminary domain model. Additionally,
no semantic validators were implemented due to the same restrictions.

The present work is related to a course in the Computer Science postgraduate
program at the Federal University of Pernambuco. The objective of the course
was to propose and develop a programming language in six weeks. These
conditions imposed serious time and scope limitations on our work.

In these terms, we focused our work on implementing a language in which a
researcher can represent all the data needed to allow the automated
generation of an experiment plan. The Microsoft DSL Tools framework
\cite{Cook2007} uses T4 transformation templates to generate code. A set of
conditions and rules has been defined to iterate through our domain model
structure and transform the model into a PDF document (see the T4
transformation model presented below). The document is intended to contain
everything demanded by the experiment plan.
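T4 templates themselves are specific to the .NET platform; purely as an
illustration of the underlying idea, the Python sketch below walks a model
instance and substitutes its values into a text template, the same mechanism
our T4 transformations apply to the domain model. The model fields and
template text are invented for this example.

```python
from string import Template

# A toy model instance; in ESEML this would be the instantiated domain model.
model = {
    "title": "Language efficiency experiment",
    "factor": "programming language",
    "levels": "Java, C++",
}

# A toy plan template; T4 templates play this role in the real tool,
# producing the experiment-plan document instead of plain text.
plan_template = Template(
    "Experiment plan: $title\n"
    "Factor under study: $factor\n"
    "Levels: $levels\n"
)

print(plan_template.substitute(model))
```

The transformation is deliberately one-way: the model is the single source of
truth, and the generated document is a disposable artifact.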

After defining your controlled experiment through the visual representation
of our DSL, you are just one click away from generating the experiment plan.

\section{How it works}
By using the ESEML Workbench, the user can instantiate their own experiment
from a pre-defined model for controlled experiments. The whole idea of our
domain model was to summarize, through a rationale of models in software
experimentation, the entities involved in a controlled experiment (see the
ESEML domain model below).

For instance, the user can define: null and alternative hypotheses, factors,
related treatments, parameters, dependent variables, subjects, experimental
units, adherence tests, hypothesis tests, threats to validity (internal,
external, conclusion, and construct), or even a Goal-Question-Metric
structure for the experiment (see the toolbox below).

Our objective was to let the user define every element and its relationships
in a model reflecting their own controlled experiment. Thus, in a GQM
structure, for example, a controlled experiment is linked to a goal, which
has an embedded relationship with questions, and every question has a set of
metrics (see the example model below).
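The GQM linkage just described can be sketched as follows; the class and
attribute names are illustrative only, not ESEML's actual metamodel.

```python
class Metric:
    def __init__(self, name):
        self.name = name

class Question:
    def __init__(self, text, metrics=()):
        self.text = text
        self.metrics = list(metrics)  # every question owns a set of metrics

class Goal:
    def __init__(self, statement, questions=()):
        self.statement = statement
        self.questions = list(questions)  # a goal owns its questions

# One goal, refined into a question, which is measured by a metric.
goal = Goal(
    "Evaluate the impact of design patterns on code quality",
    questions=[
        Question("Does pattern use reduce execution time?",
                 metrics=[Metric("execution_time")]),
    ],
)
```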
   
Elements within the ESEML model have properties that can be defined during
the construction of the experiment model. The final graph that represents the
experiment is syntactically and semantically validated (see the error
notification below). Thus, in order to minimize errors during the
configuration of the experiment, ESEML is capable of prompting the user about
possible threats to the validity of the experiment; it can also identify
confounders or show that questions are not based on any metric.
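One of the semantic checks mentioned above, flagging questions that are not
based on any metric, can be sketched as a simple traversal of the model. The
model shape and function name here are invented for illustration; ESEML's
real validators operate on the DSL Tools model graph.

```python
def questions_without_metrics(model):
    """Return the texts of questions that have no metric attached."""
    return [q["text"] for q in model["questions"] if not q["metrics"]]

# A toy model instance: the second question violates the GQM rule.
model = {
    "questions": [
        {"text": "Does pattern use reduce execution time?",
         "metrics": ["execution_time"]},
        {"text": "Is the code easier to read?",
         "metrics": []},
    ],
}

for text in questions_without_metrics(model):
    print(f"Validation error: question '{text}' has no metric")
```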

The model is a precise representation of how the experiment is going to work.
Thus, it can work as an effective communication channel between the different
stakeholders in the experimentation process, mitigating ambiguities and
conflicts in the experiment configuration; it is known that effective
communication can minimize failure in software projects.

Finally, when your experiment model is complete you will be able to generate
any artifact related to your experiment: a PDF with your experiment plan (see
the experiment plan below), mobile applications to collect data, or even R
code to perform statistical analysis. Thus, ESEML is intended to ease the
burden of the experimentation process, from planning through collecting to
analyzing data, all achievable through simple transformations of the domain
model instance. We believe that these transformations can improve the
productivity of experimentation in software engineering.
   
\section{Conclusion}
In the present work we have reviewed the concepts of empirical software
engineering, focusing on controlled experiments. Additionally, we have
addressed challenges found throughout the different phases of an experiment,
from the initial experiment plan to the final validity checks. Issues found
throughout the process of experimenting were listed in order to convey the
complexity of the domain and confirm the need for better tools to carry out
experimentation.

Domain-specific languages were defined and presented according to the
aforementioned needs, and statements from relevant researchers on the topic
grounded the conception of our tool. The process described by Martin Fowler
was followed in the conception of the ESEML Workbench: the domain model and
the visual representation of the DSL were defined. Finally, a case of
automatic generation of the experiment plan, which was the starting point for
the present study, was presented.

Problems during the development of the DSL, along with time and scope
constraints, were mentioned. Finally, the potential of the proposed DSL was
emphasized by exemplifying what code generation applied to model
transformation can do for the activities involved in a controlled experiment.
The possibilities are presented as future work in the next section.

\section{Future Work}
For future work, as previously mentioned, our DSL Workbench is intended to
generate any artifact necessary to conduct a controlled experiment, including
software to collect data effectively from the experimental units. Only
limited transformations and validators were implemented in our DSL due to
scope and time limitations; the rest are intended for the next release.

Additionally, defining an acceptable domain model from our preliminary review
took most of our time, which led us to the conception of a pre-phase for our
work: we found that a systematic review of studies proposing a formal model
for controlled experiments is necessary to proceed with our domain model.
Thus, we now intend to start such a systematic review to provide the deeper
rationale that is needed.

Furthermore, we want to obtain a deeper understanding of how to automate the
validation of formalized hypotheses and identify confounders within them.
Thus, through systematization, our DSL will try to minimize bias in the
controlled experiments that use our tool or, in other cases, provide visual
cues so domain experts can fix the formalized hypotheses manually.

\bibliographystyle{abbrv}
\bibliography{eseml}

\end{document}