\documentclass[a4paper,11pt]{report}
\usepackage{graphicx}

%\usepackage[pdftex]{color}
%\usepackage[colorlinks]{hyperref}
%\definecolor{grey}{rgb}{.1, .1, .1}
%\hypersetup{
    %a4paper,
    %pdftitle={Process-based Qualitative System Identification},   
    %pdfsubject={Qualitative System Identification}, 
    %pdfauthor={Hylke Buisman},  
    %pdfkeywords={Qualitative reasoning, modeling, System identification},
    %plainpages=false,
    %urlcolor=grey,  
    %linkcolor=grey,
    %citecolor=grey, 
    %bookmarksnumbered
%}



\title{\textbf{Automated modeling in}\\
\textbf{process-based qualitative reasoning}}
\author{
	\\ \begin{tabular}[t]{c} 
		Hylke Buisman\\
		\small{Student number: 0418846}\\
		\small{hbuisman@gmail.com}
 		\end{tabular}\\
		\\
	\ \\
	\ \\
	\ \\
	\ \\
	In partial fulfillment of the requirements for:\\
	BSc. Artificial Intelligence\\
	University of Amsterdam (UvA)\\
	The Netherlands\\
\ \\
\ \\
\\ \begin{tabular}[t]{c} 
\underline{Supervision}\\\ \\
		Bert Bredeweg and Jochem Liem\\
		\small{Human Computer Studies Laboratory} \\
		\small{Informatics Institute} \\
		\small{University of Amsterdam}\\
 		\end{tabular}\\
\ \\
\ \\}

\begin{document}




\maketitle

\setlength{\parindent}{0pt}
\setlength{\parskip}{0.3em}

\thispagestyle{empty}
\newpage
\thispagestyle{empty}
\ \\
\newpage

\begin{abstract}
Qualitative reasoning (QR) has proved itself useful in applications
involving knowledge transfer and acquisition. Building models for these
applications is, however, a difficult and lengthy task for the domain
experts involved. To relieve the strain placed on the experts and speed up
the modeling process, an automated modeling algorithm is desirable.

This thesis describes the issues related to automated modeling in QR and
introduces a preliminary algorithm. The algorithm particularly focuses on
capturing cause-effect reasoning, which is essential for understanding
and explaining the behavior of systems. By analyzing several
well-established models and their behavior, the algorithm is designed by
relating the theory underlying qualitative reasoning to possible output
behavior. The algorithm is successfully applied to several systems ranging
from simple to fairly complex, and concrete improvements are proposed that
would enable the algorithm to handle even more complex systems.
Consequently, the result of this thesis is a thorough description of the
challenges and prospects of this previously largely unexplored area, but
more importantly it introduces a preliminary algorithm with promising
results.
\end{abstract}

\newpage
\thispagestyle{empty}
\ \\
\newpage


\setcounter{tocdepth}{1}
\tableofcontents
\thispagestyle{empty}

\chapter{Introduction}

The use of qualitative reasoning (QR) has proved its worth in real-world applications
such as the automotive industry \cite{struss2003mbs} and automated generation
of control software for photocopiers \cite{sakuo1997mba}. 
These kinds of applications mainly focus
on control and engineering problems. Other applications focus on capturing
and using conceptual knowledge.

A good example of the latter are applications where QR is used to gain a better understanding of a system
or to facilitate knowledge acquisition and transfer. The positive effect of using qualitative models in these situations 
is confirmed by prior research such as \cite{bredeweg2004qme}. Similarly, qualitative models can be applied 
in domains such as ecology, using qualitative reasoning to test hypotheses and thereby explore the field (see \cite{salles2006agq} for an example).

In all these cases it is necessary for domain experts to make their conceptual knowledge explicit in 
QR models. However, domain experts may not be familiar with the techniques
that are required to make conceptual models. Even if the representation is known, modeling is a difficult task.
Consequently, an algorithm that can build such a model with minimal intervention of the expert 
would greatly relieve the strain placed on these experts and speed up the modeling process.
The goal of this thesis is to address this issue by presenting an algorithm 
that can build a qualitative model given a description of the system's behavior.


\section{Related work}
The problem of automatically generating a model representing a system's behavior is not entirely new.
The field of System Identification \cite{LjungSID} aims to build dynamical 
models from measured data. However, this approach is mainly mathematical, 
producing ordinary differential equations (ODEs) that underlie
the measured data. Although these ODEs in some sense explain the data, 
they do not explicitly represent the cause-effect relations in the system. 
In addition, these methods do not easily handle incomplete or noisy data. 

One step closer to solving the problem is a more qualitative approach
to system identification (e.g. \cite{say1996qsi,hau94learning}). By abstracting from quantitative to qualitative data, 
a more intuitive interpretation of the data can be made. 
These models use QR to find Qualitative Differential Equations (QDEs)
to explain the behavior. However, these models are generally based on 
Kuipers' constraint based approach to QR \cite{kuipers1985lqs}, which is still rather mathematically inclined and does not allow for an intuitive representation of causality. 

The area of process-based qualitative reasoning \cite{forbus1984qpt} offers better prospects of 
modeling causal relations in an intuitive manner. This field was furthered
by the contributions of GARP3 \cite{bredeweg2006gnw}. This workbench for Qualitative
Reasoning and Modeling provides a means to build and simulate QR models. 
The work presented in this thesis is especially situated in the context of the work of
process-based qualitative reasoning and GARP3.


\section{Goals}
Although GARP3 provides insightful methods for model building \cite{Bredeweg2007}, it does not support \emph{automated} model building, nor does any other related work. 
When combining this with the motivations discussed earlier, it becomes clear that a novel approach is required that can derive an explanation for a given behavior while also making the system's causal relations explicit. 
This thesis presents a preliminary algorithm for automated modeling\footnote{The term \emph{automated modeling} is used
instead of \emph{system identification}, to emphasize that this approach is not mathematically
inclined.} in process-based qualitative reasoning and provides a study of the issues that arise in this pursuit. 
The evaluation of the algorithm is based on comparing the output models with the correct models. Additionally,
the models are evaluated by comparing their simulation results with the initial input behavior.


\section{Overview}
The remainder of this thesis is structured as follows. First, chapter \ref{chapt:background_theory}
introduces background theory, elaborating on \emph{qualitative reasoning} and 
\emph{automated model building}. Next, chapter \ref{chapt:preliminaries} discusses 
theoretical concepts used in and information about the algorithm. The algorithm itself 
is explained and illustrated in chapter \ref{chapt:algorithm}. 
Finally, chapter \ref{chapt:discussion} ends with a conclusion and discussion and proposes future work.


\chapter{Background theory}
\label{chapt:background_theory}

This chapter discusses the background theory. 
Since the principles of the algorithm and the representation of its output strongly build on the 
qualitative reasoning paradigm, the following section first gives an outline of qualitative reasoning. Section \ref{sect:automated_modeling} elaborates on automated modeling.

\section{Qualitative reasoning}


\begin{quotation}
\noindent

\textit{Ford tumbled through the open air in a cloud of glass splinters and chair parts. 
Again, he hadn't really thought things through, really, and was just playing it by ear, buying time. 
At times of major crisis he found it was often quite helpful to have his life flash before his eyes. 
It gave him a chance to reflect on things, see things in some sort of perspective, and it sometimes furnished him 
with a vital clue as to what to do next. There was the ground rushing up to meet him at 30 feet per second per second, 
but he would, he thought, deal with that problem when he got to it. First things first.
}
\begin{flushright}
-- The Hitchhiker's Guide to the Galaxy: Mostly Harmless
\end{flushright}
\end{quotation}

Normally one would not consider jumping from the twenty-third floor in any situation. Years of experience
have taught us that what goes up must come down. Seemingly without effort we observe 
such situations and predict the resulting behavior based on our world knowledge. However, when studied more closely
this skill is more intricate than we experience it to be. Since birth we
have been observing the objects and processes around us, and have learned how they 
interact. From these observations we form a notion of which objects and processes `belong together'. 
That is, we cluster the behavior of the world around us in systems and construct mental models of them. 
With these models we can reason about cause and effect. It can be argued that this skill is a major part of human understanding, 
and plays an important role in how we can interact with our environment by being able to predict what will happen.

Qualitative reasoning is the paradigm that attempts to artificially replicate this process of 
reasoning with qualitative models.

\section{Origins of QR}

Qualitative reasoning can roughly be divided into two streams. The first is called `Naive physics', 
which was introduced by Hayes \cite{hayes1985snp}. He suggested that
a large formal framework should be set up that describes all commonsense reasoning knowledge in the physics domain. 
The other stream, mostly referred to as qualitative physics, originated
from the work of De Kleer \cite{dekleer1975qaq}. His approach attempts
to use qualitative techniques to solve specific engineering problems. Common for both
streams, and thus characteristic for qualitative reasoning, is that they attempt
to understand and model commonsense knowledge about the world around us. It is qualitative
in the sense that discrete values are used. This has the advantage of reducing complexity
while being more intuitive. 

QR can be seen as an area of AI research that follows these initial ideas. De Kleer and Brown \cite{dekleer1984qpc} 
proposed a qualitative reasoning and modeling approach that is 
centered on \emph{components} in which behavior is described with qualitative differential equations called \emph{confluences}.
Kuipers introduced his constraint-based approach \cite{kuipers1985lqs}, which was widely used, 
partially owing to its close relation with ODEs and the fact that he made the software (QSIM) available.
Forbus proposed Qualitative Process Theory (QPT), 
a process-based framework \cite{forbus1984qpt}. In this framework
the world is represented with \emph{objects} which have certain \emph{quantities}. 
Bredeweg et al. developed GARP3 \cite{bredeweg2006gnw} which is an integration and extension of these early 
approaches to QR. It is this integrated perspective that is followed in this thesis.


\section{Qualitative reasoning essentials}
\label{sect:qr_essentials}
In QR, objects and quantities play an important role. An object or \emph{entity}
can be just about anything; however, according to QPT it is best
to let entities resemble what we perceive as entities in the real world. \emph{Quantities} represent the parameters
of these entities. Whilst entities can only come into existence or cease to exist, quantities
can gradually change. A quantity $Q$ is built up of several components:

\begin{itemize}
\item $A_m(Q)$ - magnitude of the amount
\item $A_s(Q)$ - sign of the amount
\item $D_m(Q)$ - magnitude of the derivative
\item $D_s(Q)$ - sign of the derivative
\end{itemize}


Both the amount and the derivative of a quantity can take different values. 
A \emph{quantity space} indicates which set of interval and point values are allowed for a specific derivative or amount.
The simplest example is the \emph{mzp} quantity space: [$min, point(zero), plus$], where min and plus are intervals
and zero is a point. Although QPT allows other quantity spaces for the derivative,
GARP3 assumes that it may only take values from the \emph{mzp} quantity space. This choice is made since
knowing whether a quantity is decreasing, stable or increasing is sufficient information for most purposes.
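To make this representation concrete, the components of a quantity can be sketched in code. This is a minimal illustration restricted to the \emph{mzp} quantity space; the names are ours and do not reflect GARP3's internal representation.

```python
from dataclasses import dataclass

# Minimal sketch of a quantity over the mzp quantity space
# [min, point(zero), plus]; names are illustrative, not GARP3's.
MZP = ("min", "zero", "plus")   # min and plus are intervals, zero is a point

@dataclass
class Quantity:
    name: str
    amount: str       # A_m(Q): magnitude of the amount, here a value from MZP
    derivative: str   # D_m(Q): magnitude of the derivative, always from MZP

    def amount_sign(self) -> str:
        """A_s(Q): sign of the amount."""
        return {"min": "-", "zero": "0", "plus": "+"}[self.amount]

    def derivative_sign(self) -> str:
        """D_s(Q): sign of the derivative."""
        return {"min": "-", "zero": "0", "plus": "+"}[self.derivative]

size = Quantity("size", amount="plus", derivative="plus")
```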

\subsection{Causal dependencies}
\label{sect:causal_dependencies}

When a process (such as heating or melting) takes place, quantities change over time.
Changes are represented using \emph{direct} and \emph{indirect influences}. A direct influence is 
represented as follows:

\[Q_1 \stackrel{I+}{\rightarrow} Q_2\]

In this case a positive influence is shown, but a negative influence ($I-$) can also be used.
When a quantity is directly influenced, such as above, its derivative $D_m(Q_2)$ is
the sum of the direct influences; an $I+$ adds to the derivative and an $I-$ subtracts from it.
Since derivatives have an \emph{mzp} quantity space, one could say that
$D_m(Q_2) = plus$ when $A_s(Q_1) = +$ and inversely when a negative influence is used. 
When opposing direct influences interact the result depends on the relation between the 
amounts of the influencing quantities. For example, in a situation where an interaction such as

\[Q_1 \stackrel{I+}{\rightarrow} Q_3 \hbox{ and } Q_2 \stackrel{I-}{\rightarrow} Q_3\]

holds, $D_m(Q_3) = plus$ when $A_m(Q_1) > A_m(Q_2)$ (assuming positive amounts).

The second type of influence is the indirect influence. 
Indirect influences, also called proportionalities, are represented as follows:

\[Q_1 \stackrel{P+}{\rightarrow} Q_2\]

Again, a negative proportionality is also possible. An indirect influence propagates the effect 
of a direct influence to other quantities. The derivative 
of an indirectly influenced quantity is equal to the sum of the derivatives of its indirectly influencing 
quantities. Owing to the choice of quantity space for derivatives, this can be interpreted
such that $D_m(Q_2) = D_m(Q_1)$ when $P+$ holds and $D_m(Q_2) = -D_m(Q_1)$ when $P-$ holds. 
As with direct influences, this only holds when there are no interacting opposing indirect influences.
To avoid confusion, we will call indirect influences \emph{proportionalities} and direct influences \emph{influences}.
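The resolution of influences and proportionalities described above can be sketched by representing signs as integers in $\{-1, 0, +1\}$. This sign calculus is our own simplification, not the GARP3 reasoner: when opposing influences meet it yields $0$ (ambiguous) instead of consulting the inequality between the influencing amounts.

```python
# Hypothetical sign calculus: -1, 0, +1 stand for min, zero, plus.

def sign(x: int) -> int:
    return (x > 0) - (x < 0)

def resolve_influences(influences):
    """influences: (amount_sign, label) pairs, label +1 for I+ / -1 for I-.
    The derivative of the influenced quantity is the sign of the sum."""
    return sign(sum(label * amount for amount, label in influences))

def resolve_proportionalities(props):
    """props: (derivative_sign, label) pairs, label +1 for P+ / -1 for P-."""
    return sign(sum(label * deriv for deriv, label in props))

# Q1 --I+--> Q2 with A_s(Q1) = +, then Q2 --P- --> Q3:
d_q2 = resolve_influences([(+1, +1)])           # D(Q2) = plus
d_q3 = resolve_proportionalities([(d_q2, -1)])  # D(Q3) = min
```

Note that for the opposing influences on $Q_3$ shown earlier, with both amounts positive, this naive sum returns $0$; resolving that case requires the inequality $A_m(Q_1) > A_m(Q_2)$, which a pure sign calculus cannot capture.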

\subsection{Other dependencies}
In addition to causal dependencies there are several other types of
dependencies that are of importance. These dependencies can be divided into
four categories: calculi, (in)equalities, correspondences and value assignments.

\begin{itemize}
\item
Calculi represent that a plus or minus relation holds between
three quantities. For example, $Q_1 = Q_2 - Q_3$ indicates that $A_m(Q_1) = A_m(Q_2) - A_m(Q_3)$.
\item
(In)equalities can indicate three things: that an (in)equality holds between
\begin{itemize}
\item the magnitudes of the amounts of two quantities
\item two values from different quantity spaces
\item the magnitude of a quantity and a value from its quantity space
\end{itemize}

\item
Forbus describes correspondences as follows: ``Correspondences are the means of mapping value information [..]
from one quantity space to another'' \cite{forbus1984qpt}.
In GARP3 a correspondence can hold between two values or between two quantities. We will only consider the latter.
A correspondence is a one-to-one mapping between the values of two analogous quantity spaces. 
In other words, both quantities always have \emph{the same} value. This does not
mean, however, that they are \emph{equal}\footnote{Two corresponding quantities can have the same qualitative value but still be unequal when the current value is an interval.}. A variant of the correspondence is the inverse correspondence, which maps the values of one quantity space to their inverse in another quantity space. Both correspondences are represented as follows:

\[Q_1 \stackrel{Q}{\leftrightarrow} Q_2 \hbox{ and } Q_1 \stackrel{Q^{-1}}{\leftrightarrow} Q_2\]

Correspondences can be either directed or undirected. A directed correspondence $Q_1 \stackrel{Q}{\rightarrow} Q_2$ states that, if $Q_1$ is known, $Q_2$ will have the same value.
If, however, $Q_1$ is unknown but $Q_2$ is known, $Q_1$ will not be made equal to $Q_2$ by the
QR reasoner.

\item
Finally, value assignments can be used to indicate that a quantity has a certain value.
A value assignment can be used either to indicate amount values or derivative values.
\end{itemize}
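As an illustration of the correspondence semantics just described, the following sketch (our own naming, not the GARP3 reasoner) propagates values through directed and inverse correspondences over the \emph{mzp} quantity space:

```python
MZP = ["min", "zero", "plus"]

def propagate(value, inverse=False):
    """Map a known value through a (possibly inverse) correspondence
    between two quantities sharing the mzp quantity space."""
    if value is None:
        return None                                # nothing to propagate
    if inverse:
        return MZP[len(MZP) - 1 - MZP.index(value)]
    return value

def directed_correspondence(q1_value, q2_value):
    """Q1 -Q-> Q2: if Q1 is known, Q2 takes the same value; if only Q2
    is known, Q1 stays as it is (no propagation in reverse)."""
    return propagate(q1_value) if q1_value is not None else q2_value
```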

Together with the causal dependencies, the above dependencies form the core of the semantics of a model.
Thus, when building a model (automatically or by hand) one is mainly concerned with finding
the right dependencies as well as their interactions.


\subsection{Model fragments}
Another aspect of QR that is of interest in our pursuit is model fragments. 
They are parts of the model that become active under certain conditions and 
as a consequence introduce dependencies. Take for example a pan of water on a stove. 
At first the water is just heating up, but from the moment the water starts boiling (condition),
the amount of water decreases and the amount of gas increases (consequence). 
Such a description of conditions and consequences would be a typical model fragment 
in a model for fluid dynamics.

There are three types of model fragments in the GARP3 approach:
\begin{itemize}
\item \emph{Static} model fragments are related to what Forbus called \emph{individual views}. They
describe how changes are propagated between quantities.
\item \emph{Process} model fragments are related to Forbus' concept of processes. These model fragments
represent how changes are initiated by means of influences.

\item \emph{Agent} model fragments describe influences from outside of the system.
\end{itemize}


\subsection{Reasoning}
Once a complete model has been designed using the above components, a qualitative reasoning
engine can simulate the behavior of the described system. As Figure \ref{fig:QRArchitecture} illustrates, the reasoner takes as input a scenario, initial values, 
assumptions and the model fragments.

\begin{figure}[h!t]
	\centering
		\includegraphics[width=0.80\textwidth]{images/reasoner_architecture.eps}
\caption{\label{fig:QRArchitecture}``The basic architecture of a qualitative reasoning engine'', taken from \cite{Bredeweg2007}}
\end{figure}

The scenario and the initial values together form a description of the initial 
state of the system. Based on a set of transition rules and the assumptions, 
the reasoner evaluates how the system evolves over time. GARP3 outputs
a representation of the changes of the system over time, in the form
of a state-graph (Figure \ref{fig:state_graph}).
For more details on this subject, please refer to \cite{Bredeweg1992}.

\begin{figure}[h!t]
	\centering
		\includegraphics[width=0.80\textwidth]{images/state_graph.eps}
\caption{\label{fig:state_graph} A state-graph representing the output of the reasoner.}
\end{figure}


\section{Automated model building}
\label{sect:automated_modeling}

Given this description of the representational framework we can now focus on the task at hand.
That is, determining what model best describes the system producing a given behavior.
In the introduction the relation with the field of (qualitative) system identification was already pointed out. 
The goal of this field is to determine the internal 
workings of a (more or less) black box, of which only the behavior is known. 
Several algorithms for qualitative system identification have been proposed, such as
\cite{say1996qsi} and \cite{hau94learning}.

These and other algorithms all bear a mathematical connotation that fits a
constraint-based approach to QR. However, when it comes to finding models 
containing causal explanations, these algorithms cannot be used.

Consequently, automatically modeling systems in a vocabulary that facilitates cause-effect reasoning 
offers a new challenge, and not an easy one. First of all, there is often little
data to work with. Additionally, many models are possible for a given behavior, and it is initially difficult to constrain this set of models. One reason for this is that 
the output behaviors of the different dependencies overlap,
which makes selecting the right dependencies in the right place rather complex.

Given these difficulties, the goal of automated modeling in process-based 
qualitative reasoning is to constrain the set of all models 
to exactly those models that are consistent with the given behavior of the system.


\chapter{Algorithm preliminaries}
\label{chapt:preliminaries}

This chapter outlines the considerations related to the set-up of the algorithm.
This includes a description of the research method, input and output and the introduction
of concepts that were formed during the design of the algorithm.


\section{Method}
The design of the algorithm is based on the workings of the different
dependencies as described in the previous chapter. Using careful analysis
the relations were identified between the semantics of the individual dependencies
and how their presence would exhibit itself in the behavior of the system. During the research
the algorithm was extended step by step, based on this analysis and 
theoretically supported arguments. 

To structure the design, several well-established models were used to explore
different facets of the model building problem. The models were tackled one by one, 
and once the algorithm produced desirable models as output, a more complex model was studied. The models studied, in order of their
complexity, are:

\begin{enumerate}
\item Tree and shade growth (TreeAndShade)
\item Communicating vessels (CommunicatingVessels2)
\item Deforestation
\item Population dynamics (Population January 2007)
\item Heating liquids (Stove2)
\item Rstar (RstarEcologicalModelingJune2006)
\item Ants Garden
\end{enumerate}

All these models, except `Deforestation', can be found on the Qualitative Reasoning \& Modelling portal\footnote{Qualitative Reasoning \& Modelling portal website: http://www.garp3.org}.
`Tree and shade' is considered the least complex, since it contains
only a few quantities and has no conditions or dependency interactions. 
`Communicating vessels' is considered more complex because it contains a calculus element and inequalities. The `Deforestation' model is different from the previous
models in that it contains many clusters linked to each other by propagations.
`Heating liquids' and `Population' are one step more complex, due to the presence of dependency interactions and several model fragments with conditions. 
The final two models are the most complex owing to the large number
of quantities, interactions and conditions. Of these models, the first five were used most intensively. 
No models were kept aside for evaluation, since the algorithm's strengths and weaknesses can best be judged 
from a theoretical rather than an empirical point of view. 
This is a consequence of the fact that the design choices in the algorithm are based on logical 
argumentation.

\section{Input and output}
Recall that the algorithm's goal is to derive an explanation for a given behavior, thus 
explaining the dynamics of the underlying system. This section makes this statement more specific by
describing which data are used as input, and what the algorithm will output.

 
\subsection{Algorithm input}
The most important input to the algorithm is a description of the system's behavior in the form
of a state-graph, as GARP3 outputs it after simulation (See Figure \ref{fig:state_graph}).
The state-graph represents the changes in the system that occurred during simulation.
Branchings in the state-graph indicate that several different possible changes were encountered. Each state in the state-graph provides information about which quantities
are present in the system and what their amounts and derivatives are. 
Furthermore, it contains information about all observable (in)equalities (those (in)equalities that GARP3 displays in the (in)equality history).

In addition to a behavioral description of the system, a rough structural description of the system is required. 
This is input in the form of a scenario, which is the GARP3 representation
of the initial situation of the system. It describes which entities are included in the system, and what (some of) their quantities are. Additionally, it describes how the entities
are related to each other in terms of structural relations (not to be confused with
the causal and other dependencies). This input is closely related to the behavioral input,
since it is very difficult to give a description of a behavior if no structural description is given.

Furthermore, a description of the entity type hierarchy is required, which describes the is-a relations between the different entities.

To summarize, the following data is supplied as input to the algorithm:

\subsubsection*{State-graph}
\begin{itemize}
\item States
\begin{itemize}
\item Quantity amounts
\item Quantity derivatives
\item Quantity spaces
\item Observable (in)equalities
\end{itemize}
\item State transitions
\end{itemize}

\subsubsection*{Scenario}
\begin{itemize}
\item Description of which entities are involved
\item Partial information about the structural relations between entities
\end{itemize}

\subsubsection*{ISA-hierarchy}
\begin{itemize}
\item Full description of the entity type hierarchy.
\end{itemize}
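The input summarized above can be sketched as a set of container types. These are hypothetical structures for illustration only; GARP3 uses its own representation.

```python
from dataclasses import dataclass

@dataclass
class State:
    amounts: dict        # quantity name -> amount value
    derivatives: dict    # quantity name -> derivative value
    inequalities: list   # observable (in)equalities, e.g. ("Q1", "<", "Q2")

@dataclass
class StateGraph:
    states: dict         # state id -> State
    transitions: list    # (from_state_id, to_state_id) pairs

@dataclass
class Scenario:
    entities: list               # entities involved in the system
    structural_relations: list   # partial, e.g. ("contains", "Vessel", "Liquid")

@dataclass
class IsaHierarchy:
    parent: dict         # entity type -> parent entity type (is-a)
```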


\subsection{Algorithm output}

The algorithm (ultimately) outputs models which are ready for simulation; these are produced
in the format of GARP3.
After determining which dependencies should be placed between which quantities, 
the algorithm will output one or more models: it returns all models
that can explain (are consistent with) the input. 


\section{Assumptions}

In addition to restrictions placed on the input, the algorithm makes several assumptions.
Since there is virtually no prior work on the design of this kind of algorithm, these
assumptions help to scope the creation of the algorithm. Only after the initial explorations have been 
successfully completed can research be done into how these assumptions can be alleviated.

The following assumptions are made:

\begin{itemize}
\item All possible behaviors in the form of a state-graph are provided to the algorithm;
that is, a \emph{full envisionment} of the system's behavior is input.
\item The input behavior does not contain any noise.
\item At each state in the state-graph there are no unknown derivatives or amounts.
\end{itemize}


\section{Causality}


Recall that the GARP3 approach to QR is followed owing to its ability to represent
cause-effect relations. Because of their significance in the algorithm, these relations are discussed in
more detail below.

\subsection{Causal paths}

A causal path is a succession of quantities connected by influences and proportionalities.
It always starts with an influence, followed by any number of proportionalities,
for example:

\[Q_1 \stackrel{I+}{\rightarrow} Q_2 \stackrel{P+}{\rightarrow}\ldots \stackrel{P-}{\rightarrow} 
Q_{n-1} \stackrel{P+}{\rightarrow} Q_n\]

A causal path ends at a node that has no proportionalities leading out of it. 
If a quantity in a causal path has more than one proportionality leading
out of it, multiple different causal paths can be identified. The advantage of 
considering causal paths, is that it restricts the possible orders and directions in which
causal dependencies can appear in a model. For example, consider the following situation:


\[Q_1 \stackrel{I+}{\rightarrow} Q_2 \stackrel{I+}{\rightarrow} 
	Q_3 \stackrel{P-}{\rightarrow} Q_4 \stackrel{I+}{\rightarrow} Q_5\]

If we assume no dependency interactions, the above is very unlikely when examined from the perspective of causal paths. The given combination of
dependencies would imply a large number of very short causal paths. 
By centering the search for good models around causal paths, the search becomes
more restricted, while also staying focused on patterns that are common in 
the real world.
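The pattern restriction can be made explicit with a small check (illustrative code, with dependency labels as plain strings): a valid causal path consists of exactly one leading influence followed only by proportionalities.

```python
def is_causal_path(labels):
    """labels: the dependency labels along a chain of quantities."""
    if not labels or labels[0] not in ("I+", "I-"):
        return False                       # must start with an influence
    return all(l in ("P+", "P-") for l in labels[1:])

is_causal_path(["I+", "P+", "P-", "P+"])   # the pattern shown above
is_causal_path(["I+", "I+", "P-", "I+"])   # the unlikely combination
```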


\subsection{Determining direction of causality}
\label{sect:causality_direction}

Determining the direction of causality is a common problem: if events A and B co-occur,
how do we know if A caused B or vice versa? Assuming, of course, that there
is a causal relation between the two.

If a state in the state-graph shows at some point that $D_m(Q_1) = D_m(Q_2)$,
this is (as Section \ref{sect:causal_dependencies} shows) an indication that:

\[Q_1 \stackrel{P+}{\rightarrow} Q_2\]

might hold. However, this is just as well an indication that:

\[Q_2 \stackrel{P+}{\rightarrow} Q_1\]

holds. The reason for this is that the semantics ($D_m(Q_1) = D_m(Q_2)$) of a proportionality
are symmetric. In short, it is not possible to determine the direction of causality at this point, and thus ambiguous results regarding the direction of causality are to be expected from any algorithm for automated model building.

\section{Clusters}
\label{sect:clusters}
Causal paths are generally long, with many branching points, and are as a result difficult to study. 
It turns out to be useful to analyze causal paths within certain limits; these limits are defined by \emph{clusters}.
In this context a cluster can roughly be described as a group of quantities that exhibit `equivalent' behavior.
More specifically, in a given model a set of quantities is in the same cluster if their values either correspond or inversely correspond. 
In addition, their derivatives should be the same, or each other's inverse in case of an inverse correspondence. The equal or inverse derivatives imply a positive or negative proportionality between the two.

Since we want clusters to contain completely equivalent
quantities, two quantities in the same cluster are also not allowed to be unequal at any
point in the state-graph. If $Q_1$ and $Q_2$ always correspond, but $Q_1 < Q_2$ holds 
at some point (when both are in an interval), both quantities cannot be in the same cluster.

Through trial and error, clusters turned out not to be very meaningful when quantities
within the cluster belong to different entities. This is mainly because making clusters
that span more than one entity would conflict with the natural borders we observe between
different entities. For this reason, clusters may only contain quantities
that belong to the same entity.
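The cluster criteria can be summarized in a sketch (our own formulation; the state and entity data are assumed to be available in the illustrated form), here restricted to the direct-correspondence case:

```python
def same_cluster(q1, q2, states, entity_of, unequal_pairs):
    """True if q1 and q2 may share a cluster: same entity, never observed
    unequal, and corresponding values and derivatives in every state.
    (The inverse-correspondence case is analogous.)"""
    if entity_of[q1] != entity_of[q2]:
        return False                 # clusters never span entities
    if (q1, q2) in unequal_pairs or (q2, q1) in unequal_pairs:
        return False                 # an observed inequality forbids it
    return all(s["amounts"][q1] == s["amounts"][q2] and
               s["derivatives"][q1] == s["derivatives"][q2]
               for s in states)
```

For the `Tree and Shade' example, \emph{size} and \emph{shade} pass this test in every state, while \emph{growth rate} fails on the value correspondence.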

To illustrate the idea of clusters, consider Figure \ref{fig:tree_shade_cluster}. In this image one can see
a condensed form of the `Tree and Shade' model. The tree's size and shade have 
corresponding values and have the same derivative since they are connected with a proportionality. As a result the quantities
\emph{size} and \emph{shade} are in the same cluster. The quantity \emph{growth rate} does not fall
in the same cluster, since its value does not correspond with the other two.

\begin{figure}[h!t]
	\centering
		\includegraphics[width=0.70\textwidth]{images/tree_shade_cluster.eps}
	\caption{Clusters in the Tree \& Shade model}
	\label{fig:tree_shade_cluster}
\end{figure}


A simple consequence of this definition is that a quantity cannot be a member of more
than one cluster. If $Q_1$ and $Q_2$ are in a cluster, and $Q_1$ and $Q_3$ are also in a cluster,
then $Q_1$, $Q_2$ and $Q_3$ have to be in the same cluster, or none at all. After all,
if $Q_1$ and $Q_2$ have equivalent behavior and $Q_1$ and $Q_3$ too, then $Q_2$ and $Q_3$ should
have equivalent behavior due to the transitivity of equivalence. It seems
strange that this could occur at all. The reason that this happens is that some
identified correspondences are discarded, based on the presence of inequalities
between the seemingly corresponding quantities.

\section{Minimal covering}
\label{sect:minimal_covering}

The entire design of the algorithm is set up in such a way that each step focuses on
finding a minimal covering of the given input. In other words, a minimal set of dependencies
is sought that can explain (cover) the input behavior, and an output model should not contain
redundant dependencies.

However, at several points in the algorithm ambiguities arise, for example due to the impossibility
of determining the direction of causality. As a consequence, multiple sets of dependencies
are possible, and each set can be substituted for another. For example, in a final model
for `Tree and Shade', $Size \stackrel{P+}{\rightarrow} Shade$ could be substituted with
$Shade \stackrel{P+}{\rightarrow} Size$.

Along these lines we can make a distinction between \emph{substitutionary} and \emph{complementary} sets of dependencies. When a set of dependencies is a substitute for another set, it forms an alternative to that
set. In a single model, two sets of dependencies that are each other's substitutes can never co-occur.
On the other hand, when two sets of dependencies complement each other, each explains a different aspect of the behavior,
and both have to be present to explain the data.

Combining the different types of dependencies into a model can therefore be seen as a conjunction of sets that are complementary. 
The individual conjuncts are themselves substitutionary groups of dependencies.

\[(D_1 \oplus \ldots \oplus D_i) \wedge \ldots  \wedge (D_j \oplus \ldots \oplus D_m) \hbox{; } D_1 = \{d_1, d_2, \ldots, d_k\} \]

As indicated above, each $D_n$ is a set of one or more dependencies. Furthermore, $\oplus$ is the XOR operator. Thus, a final model will contain exactly one of $\{D_1, \ldots, D_i\}$, exactly one of $\{D_j, \ldots, D_m\}$, etc.
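The enumeration this formula implies can be sketched as a Cartesian product over the complementary groups, with each group contributing exactly one of its substitutionary alternatives. The following Python fragment is a hypothetical illustration (the dependency labels are made up; the thesis implementation is in SWI-Prolog):

```python
from itertools import product

# Each inner list is one complementary group; its elements are the
# substitutionary alternatives (sets of dependencies).
groups = [
    [{"Size P+ Shade"}, {"Shade P+ Size"}],   # two substitutionary alternatives
    [{"Growth_rate I+ Size"}],                # a group with a single alternative
]

# A candidate model picks exactly one alternative from every group.
models = [set().union(*choice) for choice in product(*groups)]
print(len(models))  # 2 candidate models
```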

In this way the algorithm builds a model by determining the substitutionary groups, picking one set of dependencies from each group, and combining the picks. A concrete example
is the fact that each cluster (Section \ref{sect:clusters}) within the model can have different possible internal causal orderings,
while the relations of every cluster should be represented in a final model.
In this case the different internal orderings form substitutionary groups, and the representations of the different clusters are complementary groups.

Clusters can in general be related to both substitutionary and complementary groups
in that dependencies in both groups comply with the boundaries of clusters.
In other words, dependencies in a substitutionary or complementary group
always lie between quantities in the same cluster.


\chapter{Algorithm}
\label{chapt:algorithm}


This chapter discusses each step of the automated modeling algorithm. For each step the issues are described and examples are given to clarify where needed.

\section{Algorithm outline}

The nature of the modeling problem lends itself well to a
backtracking programming language;
hence, SWI-Prolog \cite{Wielemaker:03b} was used.
Keeping in mind that a backtracking approach was used may facilitate the
understanding of the following sections. The algorithm generates exactly
one of the possible models per run; on backtracking it
returns the other models. In addition, a simple function is available
that returns all possible models at once.

The following steps are involved in the automated model building process:

\begin{itemize}
\item Find naive dependencies using consistency rules
\item Find clusters based on correspondences and proportionalities
\item Find internal ordering of clusters
\item Identify cluster actuations
\item Link remaining clusters with propagations
\item Initialize magnitudes
\item Prune unnecessary correspondences
\end{itemize}

All the above steps are executed in sequential order by the algorithm.
The following sections describe how the algorithm handles these
steps, and also present possible improvements where applicable.
Each step indicates what the goals, input and output of that step are.
Note that every step takes the GARP3 scenario and the state-graph
as input, whether this is mentioned explicitly or not.

\section{Finding naive dependencies}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{Finding dependencies that provide scaffolding for the rest of the algorithm.}\\
\textbf{Input:} \textit{State-graph of the system behavior, the system's scenario and a set of consistency rules.}\\
\textbf{Output:} \textit{A set of dependencies that are consistent with the entire state-graph.}
\end{quotation}


In this step, dependencies are sought that do not involve conditions or interactions.
Not taking conditions and interactions into account at this point has the advantage that the effect of the dependencies we are looking for is noticeable in the entire state-graph.
In other words, such a dependency is \emph{consistent} with all states in the state-graph. Additionally, only binary dependencies are taken into account (thus excluding calculi).
We call these dependencies \emph{naive dependencies}.

The first step in the algorithm is important for the rest of the algorithm, 
in that it provides the scaffolding for the subsequent steps. All following
steps will use the set of naive dependencies as a source to pick dependencies
from to add to the model that is being built. In a few specific cases, steps may also
add dependencies which are not part of the naive dependencies, such as
value assignments or (in)equalities.

\subsection{Consistency rules}

To identify naive dependencies, consistency rules are used. All quantity pairs in the system
are passed through these consistency rules, to enumerate
which naive dependencies could hold between these quantity pairs.
The rules consider the following information from a given state, to check if a dependency 
between quantities $Q_1$ and $Q_2$ is consistent:
\begin{itemize}
	\item Amounts $A_m(Q_1)$ and $A_m(Q_2)$ and their signs $A_s(Q_1)$ and $A_s(Q_2)$
	\item Derivatives $D_m(Q_1)$ and $D_m(Q_2)$ and their signs $D_s(Q_1)$ and $D_s(Q_2)$
	\item Inequalities, for example $Q_1 > Q_2$
\end{itemize}

This information will from now on be referred to as the \emph{state information} of a quantity. The consistency rules that are used have been derived from the semantics of
the different dependencies (see Section \ref{sect:qr_essentials}).
To give an idea of what these rules look like, a few of them are listed here:

\begin{itemize}
	\item $Q_1 \stackrel{I_+}{\rightarrow} Q_2$ if $A_s(Q_1) = D_s(Q_2)$
	\item $Q_1 \stackrel{I_-}{\rightarrow} Q_2$ if $A_s(Q_1) = -D_s(Q_2)$
	\item $Q_1 \stackrel{P_+}{\rightarrow} Q_2$ if $D_s(Q_1) = D_s(Q_2)$
	\item $Q_1 \stackrel{P_-}{\rightarrow} Q_2$ if $D_s(Q_1) = -D_s(Q_2)$
	\item $Q_1 \stackrel{Q}{\rightarrow} Q_2$ if $Q_1$ and $Q_2$ have the same value, and their quantity spaces are analogous\footnote{Two quantity spaces are analogous if
	they can be aligned in such a way that each point in $QS_1$ aligns with a
	point in $QS_2$ and each interval in $QS_1$ aligns with an interval in $QS_2$, and vice versa.}.
\end{itemize}

The set of consistency rules is applied to all states in the state-graph, 
and dependencies are only stored when they are consistent with all states. 
This results in a list of naive influences, (inverse) correspondences and proportionalities.
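As an illustration, the sign-based rules above can be sketched in Python. This is a hypothetical representation, not the thesis' Prolog code: each quantity maps, per state, to a pair of amount sign and derivative sign, and a dependency is kept only when its rule holds in every state.

```python
# Sign-based consistency rules: a maps to (A_s, D_s) of Q1, b to that of Q2.
RULES = {
    "I+": lambda a, b: a[0] == b[1],    # A_s(Q1) =  D_s(Q2)
    "I-": lambda a, b: a[0] == -b[1],   # A_s(Q1) = -D_s(Q2)
    "P+": lambda a, b: a[1] == b[1],    # D_s(Q1) =  D_s(Q2)
    "P-": lambda a, b: a[1] == -b[1],   # D_s(Q1) = -D_s(Q2)
}

def naive_dependencies(states, quantities):
    """Keep only dependencies consistent with every state in the graph."""
    deps = []
    for q1 in quantities:
        for q2 in quantities:
            if q1 == q2:
                continue
            for name, rule in RULES.items():
                if all(rule(s[q1], s[q2]) for s in states):
                    deps.append((q1, name, q2))
    return deps

# One state: flow positive and decreasing, amount positive and decreasing.
# Several rules remain consistent, illustrating the redundancy among
# naive dependencies.
states = [{"flow": (1, -1), "amount": (1, -1)}]
print(naive_dependencies(states, ["flow", "amount"]))
```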

\subsection{Naive Q-correspondences}

The set of consistency rules only contains rules for undirected
Q-correspondences. The reason is that directed Q-correspondences cannot
be distinguished from undirected ones given the assumption that
all amounts and derivatives in the state-graph are known.
This follows from the fact that the only difference between the two types occurs
when one of the two amounts is undefined and the other is defined.
In the undirected case either value can be inferred from the other,
but in the directed case only one can be inferred from the other.
Thus, since we assume that all values are defined, we cannot distinguish the two.


\subsection{Redundancy}
The set of naive dependencies gives a coarse description of the causal
relations in the system, but contains a lot of redundancy. This
redundancy is the source of the \emph{substitutionary groups} (Section \ref{sect:minimal_covering}).
A realistic example of this redundancy can be seen in Figure \ref{fig:redundancy}.

\begin{figure}[ht]
	\centering
		\includegraphics[width=0.8\textwidth]{images/redundancy.eps}
	\caption{Redundancy among naive dependencies}
	\label{fig:redundancy}
\end{figure}

In this figure, all blue colored dependencies are returned by the
consistency search when the naive dependency search is run on the `Communicating vessels'
model. It returns a proportionality for each causal direction, due to
the ambiguity of causal direction, and also the two influences which
are considered to be the correct dependencies. However,
the observed system behavior could just as well be explained by, for
example, $Flow \stackrel{I_-}{\rightarrow} Amount (left)$ and
$Amount (left) \stackrel{P_-}{\rightarrow} Amount (right)$.
In the search for a minimal covering, not all four dependencies
should be included, to avoid redundancy.
These two sets (and other subsets of dependencies) are thus
substitutionary groups of each other.

The task for the remainder of the algorithm is to select the correct 
substitutionary groups, and use the selected naive dependencies to derive more 
complex dependencies.



\section{Determining clusters}
\begin{quotation}
\noindent
\textbf{Goal:} \textit{Determine clusters of quantities to structure the search space.}\\
\textbf{Input:} \textit{The set of naive dependencies.}\\
\textbf{Output:} \textit{Sets of quantities (clusters) that belong to the same entity
and have equivalent behavior.}
\end{quotation}

Clusters (Section \ref{sect:clusters}) are an important tool for structuring the search for models. Recall that two quantities are in the same cluster when they can be connected
by a quantity correspondence and a proportionality, given that they belong to the same entity. If such a pair is found, the algorithm tries to expand the cluster by adding other quantities.
Quantities are only added if they have correspondences to \emph{all} quantities already
contained in the cluster. If no such quantities can be found, the algorithm continues
searching for other clusters.

Once this process is finished, all candidate clusters pass through a cluster validity
check to see if there is any overlap between the clusters. In other words, it looks
for quantities that are in more than one cluster, which is illegal (Section \ref{sect:clusters}). Currently, all clusters that are found to overlap are removed.
Another option would be to remove only as many clusters as needed to eliminate the overlap
(keeping one of the previously overlapping clusters). This was not done, since no situations were encountered where it is desirable.

As an illustration, Figure \ref{fig:cluster_alg} shows the clusters that are found in the `Communicating Vessels' model. The clusters reflect that the quantities Amount, Height and Pressure behave in the same way. 

\begin{figure}[ht]
	\centering
		\includegraphics[width=0.9\textwidth]{images/cluster_alg.eps}
	\caption{Clusters found in Communicating Vessels model}
	\label{fig:cluster_alg}
\end{figure}

The image also shows that Flow is not found as a cluster, simply because the naive dependencies
do not contain a correspondence involving Flow. However, since the remainder of the algorithm
uses clusters as building blocks for the model, Flow will have to be treated as a cluster.
Thus, at this point all quantities that are not included in a previously found cluster
become singleton clusters.
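The validity check and the singleton fallback can be sketched as follows. This is an illustrative Python fragment under hypothetical data structures (clusters as lists of quantity names), not the actual Prolog implementation:

```python
def valid_clusters(candidates, quantities):
    """Discard overlapping candidate clusters; leftovers become singletons."""
    overlapping = set()
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            if set(a) & set(b):            # a quantity in two clusters: illegal
                overlapping.add(frozenset(a))
                overlapping.add(frozenset(b))
    kept = [c for c in candidates if frozenset(c) not in overlapping]
    covered = {q for c in kept for q in c}
    # every quantity not covered by a cluster becomes a singleton cluster
    return kept + [[q] for q in quantities if q not in covered]

clusters = valid_clusters([["amount", "height", "pressure"]],
                          ["amount", "height", "pressure", "flow"])
print(clusters)  # [['amount', 'height', 'pressure'], ['flow']]
```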

\section{Generating causal paths}
\label{sect:alg_causal_paths}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{Selecting possible causal orderings within clusters.}\\
\textbf{Input:} \textit{A set of clusters and the set of naive dependencies.}\\
\textbf{Output:} \textit{For each cluster a possible valid causal ordering is returned;
backtracking returns other possible orderings.}
\end{quotation}



Now that the clusters have been identified, the causal ordering of the quantities
within these clusters needs to be determined. The simplest possibility is
to enumerate all possible ways in which a given set of quantities can be ordered.
Using this approach, the quantities can be connected with proportionalities
in a linear fashion, or using branching within the cluster. A cluster
of three quantities could for example be ordered as

\[Q_1 \stackrel{P+}{\rightarrow} Q_2 \stackrel{P+}{\rightarrow} Q_3\]

or, with branching, as:

\[Q_1 \stackrel{P+}{\rightarrow} Q_2 \hbox{ and } Q_1 \stackrel{P+}{\rightarrow} Q_3\]

It is clear that this method leads to a huge number of different causal orderings,
especially when the number of quantities increases. To constrain the search, the algorithm only
considers linear orderings within clusters, as branching does not often occur
in practice, and the reduction of the number of possible models offers a significant
advantage. Although this choice still ensures that all output is consistent
with the data, it may prevent the algorithm from outputting the desired model
(i.e., when branching gives the best conceptual explanation).
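The restriction to linear orderings means the candidates are exactly the permutations of the cluster's quantities, each yielding a chain of proportionalities. A small illustrative Python sketch (names are hypothetical; the thesis implementation is in SWI-Prolog):

```python
from itertools import permutations

def linear_orderings(cluster):
    """Yield every linear causal ordering of a cluster as a P+ chain."""
    for order in permutations(cluster):
        yield [(a, "P+", b) for a, b in zip(order, order[1:])]

chains = list(linear_orderings(["amount", "height", "pressure"]))
print(len(chains))  # 3! = 6 orderings for a cluster of three quantities
```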

The order of causality within a cluster is further guided by the fact that quantities
belonging to entities of the same type should have the same causal ordering.
It is not possible that clusters belonging to different instances of the same entity
type have different causal directions.
To illustrate this, consider two contained liquids in a model.
Imagine that in the first contained liquid, changes to the amount of water
propagate to the height, while in the second, changes in height propagate to
the amount. Such situations conflict with our sense
of consistency in the world, and can thus
be excluded since they are highly unlikely, if not impossible.
As a result, all models are discarded in which clusters belonging
to entities of the same type have different causal orderings.


\section{Actuating clusters}
\begin{quotation}
\noindent
\textbf{Goal:} \textit{Connecting part of the clusters, by identifying cluster actuations.}\\
\textbf{Input:} \textit{A set of clusters with internal causal orderings and the naive dependencies.}\\
\textbf{Output:} \textit{A set of actuations (using influences) that connect part of the clusters
and explain the source of change in the system.}
\end{quotation}



Clusters have no use standing by themselves; a system is, by definition, a set of things working \emph{together}. Therefore, the clusters have to be connected to each other.
This can be done in several ways: a cluster can be actuated
by another, or can act as an actuator itself. In addition, clusters can be connected
by propagating an actuation. In a complete model, every cluster should take part in at
least one of these relations; otherwise the whole is not one system but multiple
separate systems, and the input is wrong to start with.
This step identifies cluster actuations.

When one cluster actuates another, there
is an influence relation between the two. Actuations are the most
important form of connection between clusters, since these connections are the
cause of change in the system. In addition, since they are based on
influences, they are the most restricting connections: this is a result of
the specific way influences manifest themselves in the output behavior.
Due to this specific appearance in the state-graph, the number of
quantity pairs consistent with that pattern is far smaller than
the number consistent with proportionalities.
It is mainly due to this restricting power that actuations are identified first.

We distinguish two different types of actuations. 
The first is by means of an \emph{equilibrium seeking mechanism}
and the second is by means of an \emph{external actuator}.

\subsection{Equilibrium seeking mechanisms}
The algorithm first tries to find equilibrium seeking mechanisms (ESMs)
within the set of naive dependencies. ESMs are better known as `flows'.
If some quantity on one side of the flow is higher than on
the other, the flow causes the quantities to equalize. A concrete
example can be found in the `Communicating Vessels' model (Figure \ref{fig:flow_example}).
Since flows and other ESMs are common in qualitative models,
they are important to distinguish.


\begin{figure}[ht]
	\centering
		\includegraphics[width=1.0\textwidth]{images/flow_example.eps}
	\caption{The flow in the Communicating Vessels model}
	\label{fig:flow_example}
\end{figure}


An ESM is identified by the presence of a min-calculus.
More formally: an ESM holds between the clusters X, Y and Z if

\[Q_1 = Q_2 - Q_3 \hbox{, where } Q_1 \in X, Q_2 \in Y, Q_3 \in Z\]

is consistent with the state information of these quantities, and the naive 
dependencies contain:

\[Q_4 \stackrel{I-}{\rightarrow} Q_5 \hbox{ and } Q_4 \stackrel{I+}{\rightarrow} Q_6
	\hbox{, where } Q_4 \in X, Q_5 \in Y, Q_6 \in Z\]

When these relations hold, clusters Y and Z are actuated by X. Note that in Figure
\ref{fig:flow_example} $Q_1 = Q_4 = Flow$.

This definition requires that the algorithm is able to check whether a min-calculus
is consistent for three quantities.
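The core of that check can be sketched as a sign test: in every state, the sign of $Q_1$ must equal the sign of $Q_2 - Q_3$. The Python fragment below is illustrative only, using numeric stand-ins for what are really qualitative magnitudes and inequalities:

```python
def sign(x):
    """Return -1, 0 or 1 for the sign of x."""
    return (x > 0) - (x < 0)

def min_calculus_consistent(states, q1, q2, q3):
    """Check Q1 = Q2 - Q3: in every state, sign(Q1) must equal sign(Q2 - Q3)."""
    return all(sign(s[q1]) == sign(s[q2] - s[q3]) for s in states)

# Flow = Amount(left) - Amount(right): consistent when the signs line up.
states = [{"flow": 1, "left": 3, "right": 1},
          {"flow": 0, "left": 2, "right": 2},
          {"flow": -1, "left": 1, "right": 2}]
print(min_calculus_consistent(states, "flow", "left", "right"))  # True
```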

\subsubsection{Finding calculus relations}
Finding calculi is a potentially complex problem: three quantities
are involved, and checking all triples for consistency with the calculus relations
is computationally expensive, since the search space of triples is much larger
than the search space of pairs. For this reason, the algorithm first reduces
the set of candidates using the following constraints. Note that only min-relations
are considered, since they indicate an ESM; similar constraints exist for the
plus-relation.

\begin{enumerate}
	\item First of all, only those triples are considered in which
all quantities are in a different cluster. A subtraction $Q_1 = Q_2 - Q_3$ is
clearly useless if $Q_1$ and $Q_2$ are in the same cluster, since in this case they
have the same values (due to the correspondence between them).
	\item
The second constraint demands that the set of naive dependencies contains at least
one influence from $Q_1$, the result of the subtraction. The reason for this constraint
is that a subtraction has no use if it does not serve as an actuation.
	\item
Finally, both $Q_2$ and $Q_3$ should be the ends of the causal paths
within their clusters. In most cases this is the most meaningful interpretation.

\end{enumerate}
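These constraints can be sketched as a filter over candidate triples. The Python fragment below is a hypothetical illustration (the lookup tables `cluster_of`, `influences_from` and `is_path_end` are assumed inputs, not part of the thesis' Prolog code):

```python
def candidate_triples(quantities, cluster_of, influences_from, is_path_end):
    """Yield triples (Q1, Q2, Q3) that survive the three constraints."""
    for q1 in quantities:
        for q2 in quantities:
            for q3 in quantities:
                if len({cluster_of[q1], cluster_of[q2], cluster_of[q3]}) < 3:
                    continue  # constraint 1: three different clusters
                if not influences_from.get(q1):
                    continue  # constraint 2: Q1 must influence some quantity
                if not (is_path_end[q2] and is_path_end[q3]):
                    continue  # constraint 3: Q2, Q3 end their causal paths
                yield (q1, q2, q3)

qs = ["flow", "amount_l", "amount_r"]
triples = list(candidate_triples(
    qs,
    cluster_of={"flow": 0, "amount_l": 1, "amount_r": 2},
    influences_from={"flow": ["amount_l", "amount_r"]},
    is_path_end={"flow": True, "amount_l": True, "amount_r": True}))
print(triples)
```

Only the triples with `flow` as the subtraction result survive, in both argument orders; the remaining false candidates would be resolved by the extra constraints discussed next.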

Although these constraints drastically reduce the number of candidates,
some false candidates sometimes remain. At this point, two
different constraints can be applied: the GARP3 engine can be called to evaluate
which candidates are in effect consistent with the data, or the set can be further
constrained to only contain triples where $Q_2$ and $Q_3$ are of the same type.
The first option is probably the most reasonable, since it always
returns theoretically correct results. The second constraint is very strong,
and leaves us with the correct candidates in all observed cases. This choice may be justified by the fact that subtracting quantities of different types is
rather uncommon. However, it cannot be excluded, and further research should
investigate whether the constraint is too strict. Nevertheless, the algorithm applies the latter
method due to its ease of use, and the fact that no models have been encountered
where it poses a problem.




\subsection{External actuators}
After having identified all ESMs, the algorithm looks for external actuators.
External actuators are causes of change that, although being part of the system,
lie more at the edges of the system than ESMs do. Examples are the growth rate of
a tree, or the deforestation rate in the Deforestation model.

To identify external drives, the algorithm considers those influences from the naive dependencies that are not part of an ESM. For instance, when looking for external
drives in the `Communicating Vessels' model, $Flow \stackrel{I-}{\rightarrow} Amount (left)$
and $Flow \stackrel{I+}{\rightarrow} Amount (right)$ are not considered, since they are 
already part of the Flow ESM.

In selecting external drives, the minimality principle
is once again followed. If the system as a whole can be actuated using one influence and
propagations of that influence, this is preferred over the use of multiple
external actuations. A direct result is that a cluster will never have more
than one incoming actuation (more would imply interaction, which is not handled yet). A more indirect result is that an actuating cluster
will also never have more than one influence leading from it. Namely, if
$Q_1 \stackrel{I+}{\rightarrow} Q_2$ and $Q_1  \stackrel{I+}{\rightarrow} Q_3$ are
in the naive dependency set, $Q_2  \stackrel{P+}{\rightarrow} Q_3$ will also
be a naive dependency (due to the semantics of the dependencies). As a consequence,
change in all three quantities can be explained with only one influence
and a proportionality. Figure \ref{fig:redundancy} already highlighted such a situation.

An actuation between two clusters is only considered
if the naive dependencies contain an influence between every quantity in
the first cluster and every quantity in the second. Since all quantities in a cluster
are equivalent, there should be an influence between all pairs for one cluster to actuate the other.
This filters out many influences in the naive dependencies
that are more or less `by coincidence' consistent with an influence behavior pattern.

If more than one cluster can actuate the system, they are all considered candidates for external actuators. Backtracking will thus result in models with different external drives. If one influence does not suffice, more are added.

In most cases there are various places where an external drive can influence the system.
Currently, the algorithm attaches the external drive to all possible points on backtracking. 
In the future, this may also be guided by the structure in the system, as described
in the scenario. 

A good example of multiple actuation points is the Deforestation model.
A search for ESMs did not find any flow structures, so all influences are considered for the external actuation selection. According to the naive dependencies (after filtering `coincidental' influences), both the clusters \{deforestation rate\} and \{water reservoir, uses of water\} can act as an external actuation. Figure \ref{fig:deforestation_actuation} shows the possible actuation connections. Let us consider deforestation rate.
An actuation from this cluster to any one of the others, together with propagations
between the rest, would cover every cause of change in the system. However, to which cluster should the influence from `deforestation rate' lead? The scenario states that there
is a structural link between `Woodcutters' (the owner of `deforestation rate') and the `Land' cluster, and it seems reliable to use this as guidance for selecting the correct influence. On the other hand, this relies heavily on the scenario
and less on the input behavior. For this reason it has not yet been included,
but possibilities in this area should be investigated. Alternatively, it could
be used as a heuristic, by first returning models that comply with this idea.

\begin{figure}[ht]
	\centering
		\includegraphics[width=1.0\textwidth]{images/actuation.eps}
	\caption{Finding external actuations in the Deforestation model}
	\label{fig:deforestation_actuation}
\end{figure}


Summing up, the algorithm enumerates all possible external drives and selects
as many as are needed to explain the changes in the system. If there
are different drives that can each fully explain the change, backtracking
returns all possibilities (this is another example of substitutionary groups).


\subsection{Feedback}
A common phenomenon occurring in models is feedback.
A feedback in this context is a proportionality leading back from
the end of an actuated causal path to its actuating quantity.
An example can be seen in Figure \ref{fig:feedback}.

\begin{figure}[ht]
	\centering
		\includegraphics[width=0.50\textwidth]{images/feedback.eps}
	\caption{Feedback in the Communicating Vessels model}
	\label{fig:feedback}
\end{figure}

A feedback is added if the naive dependencies contain one.
The algorithm always chooses the feedback from the end of the causal path.
Sometimes it may be conceptually better to choose a feedback from halfway along the
causal path, but since the algorithm cannot derive this
from the input, it defaults to the end of the path.
This approach is based on the studied models, but it is unclear
how well it generalizes to other models.

\section{Linking clusters by propagation}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{Connecting the clusters that have not yet been connected in the previous step. This is done by means of proportionalities.}\\
\textbf{Input:} \textit{The set of clusters that have not been connected in the previous step and the set of naive dependencies.}\\
\textbf{Output:} \textit{A set of proportionalities that connects remaining clusters. The causal ordering of clusters cannot be determined, and thus backtracking returns all possible orderings.}
\end{quotation}


After actuations have been added to the model, some clusters
are connected, but in most cases not all of them. This step
therefore connects the remaining clusters using proportionalities.

This issue of connecting clusters displays some similarities
with the problem of finding causal paths within a cluster. In which order
should the clusters be connected, and is branching
possible, or are only linear orderings allowed? In connecting clusters
the same choices have been made as in finding
causal paths within clusters: only linear orderings are considered, for
the same reasons given in Section \ref{sect:alg_causal_paths}.

Since interacting proportionalities or influences are not considered yet,
the connections can simply be made by starting at an actuated cluster and from there
linking all unconnected clusters based on the naive proportionalities between
the clusters. If more than one actuator was added in the previous step of the
algorithm, the linking process is executed for each actuator. Backtracking returns all
possible orderings of the clusters.



\section{Setting initial magnitudes}
\label{sect:initial_values}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{Initialize the model's initial values, in order to finalize it for simulation.}\\
\textbf{Input:} \textit{The set of all quantities, all connected clusters with their
internal orderings and the system's scenario.}\\
\textbf{Output:} \textit{The required model fragments with (conditional or unconditional) value assignments and (in)equalities.}
\end{quotation}


Up to this point, we have only focused on dependencies such as
influences and proportionalities. Furthermore, only \emph{naive} dependencies
were considered (those not involving interactions or conditions).
With these dependencies, it was possible to identify clusters and link these clusters.
However, a qualitative reasoner cannot work with such a model yet: 
what is still missing are the initial values. The next step of the algorithm
checks or assigns initial values in one of several ways.

The initial amounts of all quantities need to be set before a simulation can be run.
There are six ways of setting these initial values, which are listed below.
The algorithm uses these methods to assign an initial value to every quantity in the system.
If one option fails, the algorithm tries the next option in the list.
The role of each of these methods is discussed in more detail below.

\begin{enumerate}
	\item By value assignment in the scenario
	\item By receiving a value via correspondence with a value that is known
	\item Making calculi evaluatable via (in)equalities
	\item Using a value assignment in a conditionless model fragment
	\item Using a value assignment in a conditional model fragment
	\item Via model fragments that have value assignments as conditions
\end{enumerate}
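The fallback order can be sketched as trying each method in turn until one assigns a value. The Python fragment below is purely illustrative (the method names and lookup tables are hypothetical placeholders for the six methods listed above):

```python
def initialize(quantity, methods):
    """Try each initialization method in order; the first value wins."""
    for method in methods:
        value = method(quantity)
        if value is not None:
            return value
    return None  # no method succeeded: the model cannot be simulated

# Hypothetical stand-ins for the first two methods in the list above.
scenario = {"size": "small"}             # value assignments in the scenario
correspondences = {"shade": "size"}      # Q-correspondence links

methods = [
    scenario.get,                                    # 1: scenario assignment
    lambda q: scenario.get(correspondences.get(q)),  # 2: via correspondence
]
print(initialize("shade", methods))  # small
```

A quantity for which every method returns `None` leaves the model unsimulatable, so the candidate model is discarded and backtracking generates the next one.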

If none of these methods succeeds, the model that is being generated
is useless, since it cannot be simulated, and it is discarded.
Backtracking ensures that the next possible model
is generated.

\subsection{Check scenario value assignments}
In most situations the input scenario already contains one or more
value assignments. The algorithm first checks for every quantity,
if its value has been set in the scenario. If so, the initial value for the
quantity has been resolved. An example is shown in Figure \ref{fig:IV_scenario}, 
where quantity `Size' receives an initial value in the scenario.

\begin{figure}[ht]
	\centering
	\includegraphics[width=0.80\textwidth]{images/IV_scenario.eps}
	\caption{An initial value set in the scenario}
	\label{fig:IV_scenario}
\end{figure}



\subsection{Check value setting via correspondences}
The second option for quantities to receive a value is
via Q-correspondences. A quantity can only receive an initial
value in this way if it is directly corresponds to a known value,
or via a series of Q-correspondences. The algorithm checks
for all quantities if they are known via such a connection to a known value.
If such a connection can be found, the initial value is resolved for this quantity.

Figure \ref{fig:IV_correspondence} demonstrates a value assignment via correspondences.
Since the quantity `Land with vegetation' receives a value in the scenario,
`Land no vegetation' also gets an initial value, as it is directly
connected to `Land with vegetation' with a quantity correspondence.
`Biodiversity' also receives an initial value, since it is linked
to `Land with vegetation' via a series of correspondences (namely via `Land no vegetation').

\begin{figure}[ht]
	\centering
	\includegraphics[width=0.80\textwidth]{images/IV_correspondence.eps}
	\caption{An initial value set via correspondence}
	\label{fig:IV_correspondence}
\end{figure}

Note that this method does not only `pass through' values that are known from
the scenario, but also those that were set in any other way.
If, for example, the algorithm manages to initialize a value through a conditionless
value assignment (Section \ref{sect:conditonless_va}), all quantities that correspond with
that quantity also receive an initial value.
After each assignment of an initial value, all quantities that correspond with it
are no longer candidates for the initial value assignment process.
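This propagation amounts to a reachability search over the correspondence links: any quantity connected, directly or through a chain of Q-correspondences, to a known value receives one. An illustrative Python sketch (hypothetical names; for simplicity the same value label is passed along, whereas an inverse correspondence would map to a different value):

```python
from collections import deque

def propagate(known, correspondences):
    """Spread known initial values along (undirected) Q-correspondence links."""
    values = dict(known)
    adj = {}
    for a, b in correspondences:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    queue = deque(values)
    while queue:
        q = queue.popleft()
        for neighbour in adj.get(q, []):
            if neighbour not in values:
                values[neighbour] = values[q]  # receives the corresponding value
                queue.append(neighbour)
    return values

known = {"land_with_vegetation": "medium"}
links = [("land_with_vegetation", "land_no_vegetation"),
         ("land_no_vegetation", "biodiversity")]
print(propagate(known, links))
```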


\subsection{Initializing calculi}
\label{sect:init_calculi}
Initializing calculi lies just beyond the fringe of what the algorithm
can currently do. Although this method has no actual implementation,
there are some options that future work can investigate.

A calculus element $Q_1 = Q_2 - Q_3$ can be evaluated when the relation between
$Q_2$ and $Q_3$ is known. If $Q_1$ can only be min, zero or plus, it suffices to
know whether $Q_2$ is smaller than, equal to, or greater than $Q_3$. Let us assume this is the case,
as is most common with flows.
In the simplest situation this (in)equality information is given in the scenario.
This need not be the case, however, as illustrated by the `Communicating Vessels' model.
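Under these assumptions, evaluating the calculus reduces to a three-way case distinction on the (in)equality. A minimal sketch (the min/zero/plus labels follow the quantity space used above; the function name is hypothetical):

```python
def evaluate_difference_calculus(relation):
    """Sign of Q1 = Q2 - Q3, given the (in)equality between Q2 and Q3.

    relation: '<', '=' or '>' describing Q2 versus Q3.
    Returns the value of Q1 in the {min, zero, plus} quantity space.
    """
    return {'<': 'min', '=': 'zero', '>': 'plus'}[relation]
```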

Given that a calculus is present, there must be some kind of initialization:
without it, no behavior would be possible.
Studying all initial states teaches us what the relation between $Q_2$ and $Q_3$
was in the initial state. Besides showing what the initial (in)equalities were,
this also gives us information on how they might be realized.
The `Communicating Vessels' model can serve as an example.
Figure \ref{fig:calc_ineq} shows the value histories of the three quantities
involved in the min-calculus.

\begin{figure}[ht]
	\centering
	\includegraphics[width=0.70\textwidth]{images/calc_ineq.eps}
	\caption{Value history for quantities involved in the min-calculus}
	\label{fig:calc_ineq}
\end{figure}

All states displayed are also initial states.
States 1, 2 and 4 show that \emph{Oil left} can initially be
equal to, smaller than and greater than \emph{Oil right}, as indicated by the amount of \emph{Flow}.
From this information, one can conclude that there are model fragments and dependencies
that allow the initial (in)equalities to assume all values. It is important
to know that when the GARP3 reasoner encounters an underivable value assignment
or (in)equality as a condition, it assumes it to be true. This is often used
to let a simulation assume different initial values or (in)equalities when values
are unknown in the scenario.
Consequently, the model should contain three model fragments,
each containing a condition for one of the (in)equalities between the pressures.
Following this approach will not lead to the same model as was used for the input,
but the result is behaviorally equivalent.

If the value history shows only one possible (in)equality between the operands,
then a conditional (in)equality is more likely. In this case
a condition would need to be selected, perhaps similar to the approach
described in Section \ref{sect:conditonal_va}.


\subsection{Conditionless value assignments}
\label{sect:conditonless_va}

The next option tried is to find a conditionless value assignment: that is,
a value assignment that holds in all cases. An example is found in the
`Tree and Shade' model (Figure \ref{fig:conditionless_va}).

\begin{figure}[ht]
	\centering
	\includegraphics[width=0.50\textwidth]{images/conditionless_va.eps}
	\caption{A conditionless value assignment}
	\label{fig:conditionless_va}
\end{figure}

These value assignments are simple to find. For a given quantity, the
algorithm checks whether its magnitude is the same in all states.
If so, a conditionless value assignment can be added that sets the
quantity to that magnitude. If the quantity does not consistently have the same
value, a conditionless value assignment is not applicable.
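This check can be sketched as a single pass over the states of the state-graph; the dict-per-state representation is an assumption made for illustration only:

```python
def conditionless_assignment(states, quantity):
    """Return the magnitude for a conditionless value assignment on
    `quantity`, or None if its magnitude varies between states.

    states: iterable of dicts mapping quantity name -> magnitude.
    """
    magnitudes = {state[quantity] for state in states}
    # A single magnitude across all states admits a conditionless assignment.
    return magnitudes.pop() if len(magnitudes) == 1 else None
```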


\subsection{Conditional value assignments}
\label{sect:conditonal_va}

Subsequently, the algorithm tries to find a conditional value assignment.
This step determines which values to assign, and under which conditions.

Finding the right quantity to place the conditions on is not straightforward.
Considering all possible quantities (or even combinations of quantities) as conditions
would cause the search space to explode. One restriction is that the (initial) value
of the quantity on which the condition is placed must be known.
The current approach interprets this quite strictly, by only placing conditions
on quantities that are given in the scenario. This works well for the Deforestation
model\footnote{The only used model with conditional value assignments.}. However, this
approach is likely to break down when models become more complex.

A more general approach would prefer placing conditions on
quantities that lie close, in terms of causality, to the quantity receiving
the value. It is unlikely that a quantity's value depends on a quantity
on the `other side' of the model.
For a quantity exerting an influence, this could mean that conditions are placed
on the quantity which it influences. Additionally, a condition for $Q_1$'s value
assignment can only be placed on $Q_2$ if, for every value of $Q_2$, $Q_1$ always has
the same value. Combining these restrictions results in a method that places a
condition on the closest quantity for which the condition is consistent.
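The consistency restriction on $Q_2$ amounts to requiring that $Q_2$'s value functionally determines $Q_1$'s value across all states. A hedged sketch, using the same illustrative state representation as before:

```python
def conditional_assignment(states, q1, q2):
    """Return a mapping {value of q2 -> value of q1} that can serve as a
    set of conditional value assignments, or None if q2 does not
    consistently determine q1 across the states.

    states: iterable of dicts mapping quantity name -> value.
    """
    mapping = {}
    for state in states:
        condition, value = state[q2], state[q1]
        if mapping.setdefault(condition, value) != value:
            return None  # same condition, different q1 values: inconsistent
    return mapping
```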


\subsection{Values as conditions in model fragments}
\label{sect:value_conditions_mf}

Finally, the algorithm should try to identify whether model fragments are required
that contain value assignments as conditions. These model fragments
are often used to let unknown quantities assume certain values as possibilities (see also
Section \ref{sect:init_calculi}). Examples can be found in the `Communicating Vessels'
and `Population' models.

The algorithm could recognize this in the same way as was described for
finding (in)equalities when initializing calculi. If the unknown quantity has
multiple values across the initial states, this type of model fragment
needs to be added.

This method of setting initial values is, however, not yet part of the current version of the algorithm.


\par\ \\\par
If, after all these methods have been applied, some quantities still have
unknown values, the generated model should be discarded.





\section{Dependency interactions}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{To handle more complex models, influence interactions are identified.}\\
\textbf{Input:} \textit{The state-graph and a set of consistency rules.}\\
\textbf{Output:} \textit{For each quantity a set of influences that are found to interact on it, if applicable.}
\end{quotation}


Models containing no conditions or interactions can
be generated with the algorithm as described up to this point without too many problems.
To handle more complex models, the algorithm also includes
a step that identifies influence interactions. An example of an influence
interaction can be found in Figure \ref{fig:infl_interact}.
To check whether it is necessary to search for interactions, the
algorithm checks whether there are clusters that are not connected to any other cluster.


\begin{figure}[ht]
	\centering
	\includegraphics[width=0.50\textwidth]{images/infl_interact.eps}
	\caption{Influence interaction in the Population model}
	\label{fig:infl_interact}
\end{figure}

The reason that interactions are not found as part of the naive dependencies
is that the effect of the individual dependencies is not consistent with
the \emph{entire} state-graph. One could say that the whole of the interacting
dependencies is greater than the sum of its parts. A method that searches for
interactions should thus consider whether a combination of dependencies
is consistent with the entire state-graph.

The algorithm does this by using the same kind of consistency rules
as used for naive dependencies, but for more dependencies at once.
This is currently only implemented for influences, but is easily extendable
to interacting proportionalities.

The search for interactions assumes that all interacting dependencies are present as
opposing pairs, for example birth vs.\ death or immigration vs.\ emigration.
In some cases this is obviously not true, for example when a population is assumed
to have no immigration. In that case we assume that influences with the same sign
are bundled: with birth, death and emigration, death and emigration can be bundled
into a quantity `population outflow'. We assume that the structure of the system,
as given in the scenario and the state-graph, facilitates this.

The algorithm then checks for all quantities whether an interaction is possible.
Obviously, a future version of the algorithm should only consider
quantities that are in unconnected clusters.
For these clusters, pairs of opposing dependencies are added until a set
of interacting dependencies is found that is consistent with the entire state-graph.
For this, only consistency rules for opposing pairs are used. Sets with multiple
opposing pairs are considered fully covering when all states are covered,
except possibly some states in which the derivative of the actuated quantity is stable.
In these states the contributions of all interacting dependencies sum to zero.
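The `fully covering' criterion can be sketched as follows. Which states an opposing pair actually covers is determined by the consistency rules, so the covered set is simply taken as given here; the representation is an illustrative assumption:

```python
def fully_covering(states, covered, actuated):
    """Check whether a set of opposing influence pairs covers the state-graph.

    states:   list of dicts; states[i][actuated] is the derivative of the
              actuated quantity in state i ('min', 'zero' or 'plus').
    covered:  set of state indices covered by at least one opposing pair.
    Uncovered states are tolerated only when the actuated derivative is
    stable ('zero'), i.e. the interacting contributions may cancel out.
    """
    return all(i in covered or state[actuated] == 'zero'
               for i, state in enumerate(states))
```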

This is not a robust approach, and it is not meant to be: its purpose
is to investigate the possibilities of finding interactions.
A first improvement to be made is that the GARP3
engine should be used to check for consistency instead of the consistency rules.
This ensures that no exceptions are overlooked, and it also removes the
need for the questionable assumption that some states do not have to be covered
when the actuated quantity's derivative is stable.

Returning to the algorithm: sets of interactions are now returned.
To ensure that these sets of interacting dependencies are minimal, subsets are removed.
That is, if a certain opposing pair covers part of the state-graph, but another
pair covers an even larger portion of the state-graph, the former is removed;
it is redundant due to the presence of the second pair.
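This subset removal can be expressed directly in terms of the covered state sets. The pair names and the frozenset representation below are illustrative assumptions:

```python
def remove_redundant_pairs(coverage):
    """coverage: dict mapping an opposing pair to the frozenset of state
    indices it covers. Drops every pair whose coverage is a proper subset
    of another pair's coverage, keeping the interaction set minimal."""
    return {pair: cov for pair, cov in coverage.items()
            if not any(cov < other for other in coverage.values())}
```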

In addition, the returned sets of interactions may contain conflicting
dependencies. For instance, when run on the `Communicating Vessels' model,
the interaction search returns, among others, the following interactions:

$\hbox{Pressure (left)} \stackrel{I+}{\rightarrow} \hbox{Flow}$ and $\hbox{Pressure (left)} \stackrel{I-}{\rightarrow} \hbox{Flow}$

Having two dependencies of opposite sign running in the same direction is definitely
a conflict. Rules that detect these and other conflicts are used to filter out all 
sets that contain conflicts.
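A conflict filter of this kind can be sketched as a check for opposite-signed dependencies between the same pair of quantities; the triple representation of dependencies is an assumption for illustration:

```python
# Opposite-signed counterpart of each dependency type.
OPPOSITE = {'I+': 'I-', 'I-': 'I+', 'P+': 'P-', 'P-': 'P+'}

def has_conflict(dependencies):
    """dependencies: set of (source, type, target) triples, e.g.
    ('Pressure (left)', 'I+', 'Flow'). A set conflicts when it contains
    two dependencies of opposite sign between the same quantities."""
    return any((src, OPPOSITE[typ], tgt) in dependencies
               for src, typ, tgt in dependencies if typ in OPPOSITE)

def filter_conflicting(candidate_sets):
    """Keep only interaction sets free of conflicting dependencies."""
    return [s for s in candidate_sets if not has_conflict(s)]
```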

Using the above method, all influence interactions can be found in the `Population'
model, for both a closed and an open population (without and with emigration/immigration).




\section{Conditional dependencies}
\label{sect:conditions}

Dependencies that only hold under certain conditions were already discussed
in the context of initial magnitudes. However, a more general approach should
be adopted that can also handle other conditional dependencies.

Even though conditional causal dependencies are considered bad practice,
systems that do include such constructions should not be excluded.
Currently, the algorithm does not support all types of conditional
dependencies, and future work should further investigate this area.


\section{Adding correspondences}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{Add the correspondences that were used in the preceding steps.}\\
\textbf{Input:} \textit{The complete model up to this point, and the naive dependencies.}\\
\textbf{Output:} \textit{A set of correspondences that is added to the model.}
\end{quotation}

Now that a model is built up, all correspondences that have been used in the
process are added to the model. This includes correspondences used
in clusters and those that were used for `passing through' values
when setting the initial magnitudes.

\section{Model evaluation}

\begin{quotation}
\noindent
\textbf{Goal:} \textit{Assessing how good the output model is.}\\
\textbf{Input:} \textit{The original model, its behavior and the generated model.}\\
\textbf{Output:} \textit{The generated model, and an evaluation of how good it is.}
\end{quotation}
 
When the complete model generation process has finished, it can be evaluated.
This is currently done in two ways. First of all, the model is output
to GARP3, and a simulation is run. Based on the simulation, the user
can judge whether the simulation's output state-graph (sufficiently) conforms to the
input state-graph. Additionally, the output model itself is compared to
the model that was used to generate the input behavior.
Eventually this evaluation process should be automated, or should at least
require minimal user intervention.


\chapter{Results}

The description of the algorithm discusses the current implementation,
but also makes concrete suggestions for improvements.
To emphasize that the current algorithm already achieves promising results,
this chapter briefly describes which systems can be automatically modeled with it.

\section{Tree and shade growth}
The tree and shade system can be successfully modeled
with the algorithm. Two models are found, representing both ways
a causal relation can exist between Shade and Size.
The initial magnitude assignment correctly finds a conditionless
value assignment on Growth rate.

When the simulation is performed, the same behavior is output as was input to
the algorithm.


\section{Communicating vessels}

The dependencies in this model are correctly found, and the algorithm
returns six possible models, one for each possible causal ordering within
the amount-height-pressure cluster.

The algorithm correctly identifies the ESM-based actuations of the
clusters by properly finding the min-calculus. Furthermore, all
necessary causal dependencies and correspondences are identified.
The only things missing are the model fragments that set the initial values
and initialize the calculus. The original model uses a combination of conditional
model fragments and equivalences, which initialize the calculus using the transitivity
of the (in)equality relations. The algorithm does not yet implement
methods for finding such complex structures; however, Sections \ref{sect:init_calculi} and \ref{sect:value_conditions_mf} blueprint a method to identify conditional model fragments,
which would already solve most of the problem. As a consequence, this difference with the desired model is only minor.

Since no initial values are set, the input scenario will not run.
However, when the generated model is simulated using a scenario that
provides an (in)equality between Pressure (left) and Pressure (right),
the simulation runs without problems. This emphasizes that
the major part of the generated model is correct.


\section{Deforestation}

The Deforestation model (containing the entities `Woodcutters', `Vegetation', `Water', `Land' and `Humans') is successfully modeled, including setting initial magnitudes using conditions.
The simulation output behavior is the same as the input behavior. The causal ordering
does differ, in that it does not capture the branching of causal paths in the original model.
It is, however, questionable whether the model used as input is the best way to represent
the deforestation process. This is an interesting example where strictly following
the theory, as the algorithm does, arguably gives better results than human modeling.

With backtracking, more than 2000 models are returned, which is a consequence
of all possible causal orderings within and between clusters.

\section{Population dynamics}

The Population model contains a scenario for a closed population (only birth and death)
and an open population (also immigration and emigration). 
In both models, all dependencies that explain the behavior
of the system are correctly found. This includes the interacting influences
on the Size quantity. 

Simulation does not provide the correct output behavior. As in the Communicating
Vessels model, this results from the algorithm not finding the initial values.

\section{Heating liquids, Rstar and Ant's Garden}
Running the algorithm on the Heating liquids, Rstar and Ant's Garden
models did not result in useful output. 

The Heating liquids model contains many dependencies and constructs,
such as value correspondences and (in)equalities that apply
under other (in)equality conditions, which fall outside the
scope set out for this thesis. It is therefore not surprising
that the algorithm does not perform well on this model.

Both the Rstar and Ant's Garden models resulted from
research in a specific application domain,
whereas the other models are normally used to illustrate QR.
As a result, these models are an order of magnitude more complex.


\chapter{Conclusions \& Discussion}
\label{chapt:discussion}

This thesis has addressed the need for a solution to automated modeling in qualitative reasoning. This chapter discusses its results and proposes future work.

\section{Conclusions}

Building on previous work in the field of QR, and especially the GARP3 workbench, 
this thesis describes an algorithm that accommodates the need for automated model building 
with a focus on cause-effect relations. To achieve this goal, several existing
models of increasing complexity were studied, to find relations between
the semantics of QR primitives and possible outputs. 

The algorithm was designed by iteratively improving it based on the
different aspects of the analyzed models. As a result, the algorithm
itself follows an iterative approach to model building: first
the simple, naive dependencies are identified, from which the algorithm
extends the model to capture more complex causal relations.

To find dependencies between quantities, consistency rules are used to
assess which dependencies are wholly or partially consistent with the input behavior.
In addition, clusters are used to structure and restrict the search space.
These groups of equivalent quantities partition the search space into smaller
parts and thus reduce the number of dependencies that can be added.
Within and between these clusters, causal paths are identified to form a complete model.
Finally, all quantities are checked to verify whether their initial value is known.
If not, the algorithm sets the initial values by means of (conditional or unconditional) value
assignments.

For evaluation, output models are compared to the correct models. Additionally
the output models are simulated using GARP3, to compare their behavior to the
original input behavior.

The algorithm outputs correct models for the `Tree and Shade' and `Deforestation' systems,
and comes close to correct output for the `Communicating Vessels' and `Population' models, where only the initialization of values is missing. In addition, it illustrates and applies methods for constraining the complex search space of possible models.

\section{Discussion}

While these results are promising, they also indicate that the algorithm cannot
yet handle all types of models.
First of all, the algorithm cannot yet satisfactorily handle all types of conditions. As a consequence, there are quite a few models that cannot be generated. However, Section \ref{sect:initial_values} proposes some possibilities to extend the current algorithm to deal with conditions in the context of initial values.
Similarly, a concrete approach to finding (in)equality relations has not been added
to the algorithm. More investigation should be done to assess in which settings
conditions and (in)equalities appear. Based on this information, solutions can be sought for these issues.

Finally, the current approach to dealing with interactions deserves attention.
The algorithm uses consistency rules to find which interactions are necessary
to explain the data. This approach does not suffice for models with many interactions on one quantity, because it would require more and more rules. While the current method works for most of the analyzed models, letting the GARP3 engine verify whether certain interactions are consistent with the states from the state-graph would resolve this problem.



\section{Future work}

In order to extend the current algorithm to a full-fledged automated model building
algorithm, future work can research several issues. 


This thesis has also shown that it will not be possible to return
exactly one model for each input (see Section \ref{sect:causality_direction}). This is not
a shortcoming of the algorithm, but rather an issue inherent to the general problem
of resolving ambiguity in causality. An interesting way to resolve this
could be to increase the interaction between the user (an expert) and the algorithm.
The algorithm might request specific information from the user to resolve ambiguities regarding,
for example, causality.


In addition to interaction with the user, a closer connection with the GARP3 engine
should be established. The engine could simulate partially complete
models during the building process and thus give feedback to direct the remainder of
the building process.


Furthermore, the current output models could be improved by splitting the model
into parts based on re-usability. Splitting the model into smaller, reusable model
fragments will make it more insightful and more general.


Another issue concerning the output is that if multiple models are consistent,
the algorithm simply lists them all. A more compact output would be desirable. Most
variation among the models lies in the causal ordering within
and between clusters. Thus, it would be more intuitive if the algorithm
output only the different clusters and a list of constraints placed on the orderings.


Additionally, it may be possible to make heuristic assertions about the relative likelihood
of different models. If such heuristics can be found, they can serve as a ranking function
for the final output, such that the most likely models are output first.


Finally, the state-graph offers more information than is currently used by the
algorithm. A good example is branching points in the state-graph: the fact
that branching occurs might indicate that certain variables are unknown or not restricted.
Since the current approach mainly studies information within states rather than
between states, this information is not used. Making use of it
may make the algorithm more robust.


\chapter{Acknowledgments}
The eight-week project that resulted in this thesis would not have been possible
without the guidance of my first and second supervisors, Bert Bredeweg and Jochem Liem.
In addition, I would like to thank Floris Linnebank for giving me a clear insight into the
workings of the GARP3 engine.

I would also like to thank my friends, and especially my family, for being supportive during the past years of study. Their views have been a great incentive for me.

Finally, I would like to thank my teachers and professors, for their guidance and inspiration during the three years of my Bachelor at the University of Amsterdam. 


\bibliographystyle{abbrv}
\bibliography{ref}


\end{document}
