%********************************************************************
% Appendix
%*******************************************************
% If problems with the headers: get headings in appendix etc. right
\markboth{\spacedlowsmallcaps{Appendix}}{\spacedlowsmallcaps{Appendix}}
%\chapter{Appendix Test}
%************************************************
\chapter{Landscape evolution framework: more details on the model}

\begin{flushright}{\slshape    
    We have seen that computer programming is an art, \\ 
    because it applies accumulated knowledge to the world, \\ 
    because it requires skill and ingenuity, and especially \\
    because it produces objects of beauty.} \\ \medskip
    --- \citeauthor{knuth:1974}, \citetitle{knuth:1974},
\citeyear{knuth:1974} 
\end{flushright}

\section{Technical overview on the model software}
\label{appChap:theModel}
As mentioned in \mySecNoSpace{sec:extractionHydroNet}, we created
new software to extract river networks from an arbitrary
landscape represented by a \ac{DEM}. The software performs the
operations described in \mySecNoSpace{sec:extractionHydroNet} and
provides an interface that allows communication with the
optimizer, \ie the \ac{MOEA} framework \cite{moeaframework:2013}.

The baseline for this work is the original code used by
\citeauthor{paik:2011} in \cite{paik:2011}, which the author
kindly provided to us. It is written in Fortran 95 and consists of
seven subroutines plus the main body, which defines the main
operations the software can perform. The configuration of the
model is hard-coded, so changing parameter values requires
recompilation.

There are several reasons to rewrite the software. In
\mySecNoSpace{sec:extractionHydroNet} we mentioned the need to
take control of each part of the model and to be able to change it
in order to test new configurations. The other reasons concern the
coupling with the optimization software and the need to ease the
reusability and maintainability of the code. For these reasons we
chose to re-implement the functionality of Paik's software within
an object-oriented framework, which supports code reuse by design.

Object orientation is a programming paradigm that represents
concepts as \enquote{objects}, which have data fields (attributes
that describe the object) and associated procedures known as
methods. Objects, usually instances of classes, interact with one
another to build applications and computer programs
\cite{kindler_object:2011}.

We chose to implement this architecture in the \ac{C++}
programming language because of its excellent performance and the
availability of state-of-the-art libraries for common operations.
One example is our use of \acf{YAML} to format the parameter
setup: a solid \ac{C++} library is available to handle that data
format.

\subsection{Software architecture}
We chose to design the new software as a static library of
functionalities that can be reused for various operations: we
needed the same functionalities to be linked into the \ac{MOEA}
framework and to be used again when analyzing the landscapes it
produced as outputs.

The core operations performed by the software are described in the
main part of the thesis. The main functionality not covered there
is the data handling class, \ie the class OurMatrix. Its header,
with the data containers and operators, is shown in Listing
\ref{lst:OurMatrix}.

\lstinputlisting[firstline=1,
lastline=46, float=tb, language=C++, tabsize=4, numbers=left,
numberstyle=\tiny, stepnumber=2, numbersep=5pt, caption={Header of
the template class OurMatrix.}, captionpos=t,
label=lst:OurMatrix]{CodeFiles/OurMatrix.h}

The library included in the software is the \textit{yaml-cpp}
library, a \ac{YAML} parser and emitter in \ac{C++} matching the
\ac{YAML} 1.2 spec. Source code and usage instructions are
available at \url{http://code.google.com/p/yaml-cpp/} under the
\acs{MIT} license. It requires the headers of the Boost \ac{C++}
libraries, which are \enquote{free peer-reviewed portable \ac{C++}
source libraries} and very commonly used with \ac{C++}.

\subsection{How to write good software: testing and evaluating}
To test the software we wrote, we rely on the Google \ac{C++}
Testing Framework. Its site describes it as
\blockquote{Google's framework for writing \ac{C++} tests on a
variety of platforms (Linux, Mac OS X, Windows, Cygwin, Windows
CE, and Symbian). Based on the xUnit architecture. Supports
automatic test discovery, a rich set of assertions, user-defined
assertions, death tests, fatal and non-fatal failures, value- and
type-parameterized tests, various options for running the tests,
and \ac{XML} test report generation.} It is available under the
\enquote{New BSD
License}\footnote{\url{http://opensource.org/licenses/BSD-3-Clause}}
at \url{http://code.google.com/p/googletest/}. Within the testing
framework, we performed eighteen tests on the core functionalities
of the software; $658$ assertions were evaluated to ensure code
correctness both during development and at deployment.

The model relies on a certain amount of data which must be
produced and held during the execution of the software. An
optimization run usually lasts tens of hours, so memory management
becomes an important issue. We assessed its correctness and
coherence using Valgrind.
\blockquote{Valgrind is an instrumentation framework for building
dynamic analysis tools. There are Valgrind tools that can
automatically detect many memory management and threading bugs,
and profile your programs in detail. It runs on the following
platforms: X86/Linux, [\ldots]. Valgrind is Open Source / Free
Software, and is freely available under the GNU General Public
License, version 2.}

Performance is also a critical issue: during the optimization, the
operations performed by the model are executed millions of times.
Therefore we analyzed performance with a profiler,
\emph{gprof}. Profiling lets you learn where the program spends
its time and which functions call which other functions while it
executes. This information can show which pieces of the program
are slower than expected and might be candidates for rewriting to
make it execute faster.

The profiling analysis showed that the model spends most of the
execution time in the extraction of flow directions. Some time is
also spent in depression filling: the amount is greatly influenced
by the smoothness of the landscape and by the size of the
depressions. After some improvements, the average performance in
experiment \myExpOne has been about $40$ ms per function
evaluation, \ie per depression filling, flow routing and
objectives evaluation.

\section{Spatial interpolation: IDW}
\label{appChap:IDW}

\acf{IDW} is a deterministic method for multivariate interpolation
from a known scattered set of points. The values assigned to
unknown points are calculated as a weighted average of the values
available at the known points. The weight applied to each known
point is an inverse function of its distance from the point to be
estimated.

The expected result of integrating \ac{IDW} into our framework
model is a discrete assignment of the sought function
$f(\cdot)$, sampled by the optimizer over the \ac{DEM} domain:
\begin{equation}
f(x,y): (x,y) \rightarrow \mathbb{R}, \quad (x,y) \in \mathbf{D}
\subset \mathbb{R}^2
\end{equation}
where $\mathbf{D}$ is the study area \ie the \ac{DEM}. The set
of $N$ known data points can be described as a list of tuples:
\begin{equation}
\left[(x_1, y_1, z_1), (x_2, y_2, z_2), \ldots, (x_N, y_N,
z_N)\right].
\end{equation}

We applied the \citeauthor{shepard_IDW:1968} method
\cite{shepard_IDW:1968}. The estimated value of the function
$f(\cdot)$ at a given (not sampled) point is
\begin{equation}
f(x, y) = \sum_{i = 1}^{N}{ \frac{ w_i(x, y) z_i } { \sum_{j
= 1}^{N}{ w_j(x, y) } } }
\end{equation}
where
\begin{equation}
w_i(x, y) = \frac{1}{d(x,y,x_i,y_i)^p}
\end{equation}
with $p$ a weighting exponent, $d(\cdot)$ the Euclidean distance
and $(x_i, y_i)$ a sampled point. With this method, the
interpolated function is continuous, once differentiable, and
passes through the sampled points ($f(x_i, y_i) = z_i$).

We used the \ac{IDW} interpolation method with $p=2$ and a fixed
grid of sampling points: one point every ten cells in the
\ac{DEM}. To speed up the interpolation process, we also
implemented a maximum window outside which sampled points are not
considered, \ie they have weight $0$ by default. The window
dimensions were set to $31 \times 31$ cells, centered on the point
to be evaluated.

\section{Constraint feasibility}
\label{appChap:constraint}
\citeauthor{paik:2011}'s \ac{GLE} model features a constraint
called the \enquote{tectonic condition}, based on the hypothesis
that the mass gained by uplift equals the total loss of sediment
mass from the whole landscape. This requirement also means that
the sum of elevations remains constant during the optimization
process.

This constraint has proved very challenging:
\myFig{fig:massConstraintFeasibility} in
\myChapNoSpace{chap:instanceOfFramework} shows the probability of
randomly choosing a landscape that fulfills the constraint.
The figure is based on the evaluation of that probability,
given a discrete set of elevation variations that can be applied
to each cell of the \ac{DEM} during the whole optimization.

To evaluate the probability and build the figure, we used the
following procedure. The variation in elevation at each cell can
be treated as a random variable $X$, whose possible values are the
discrete integers in the range of variation, \eg $X \in \Omega =
\{-1, 0, 1\}$. The probability distribution of this variable is
uniform: therefore, in a one-cell \ac{DEM} the condition to
satisfy the mass constraint is $X = 0$ and its probability is
$P[X = 0] = \frac{1}{K}$, $K$ being the number of values that $X$
can take, \ie the cardinality of its range of
variation.\footnote{In this section, a capital letter indicates a
random variable, while the lower case letter indicates the
realization of the random variable, when its value is set (and it
is no longer random). Moreover, the probability that the variable
$X$ assumes a certain value, say $0$, is expressed as $P[X = 0]$
or $p_X(0)$, $p_X(\cdot)$ being the probability distribution
function or pdf. The Greek letter indicates its set of
definition.}

When the \ac{DEM} is composed of more than one cell, the mass
constraint condition becomes:
\begin{equation}
z_N = \sum_{i=1}^N x_i = 0
\label{eq:massConstraint}
\end{equation}
where $x_i$ is the variation at cell $i$ and $N$ is the total
number of cells in the \ac{DEM}. Since each $X_i \in \Omega$ is a
random variable with the same distribution, \ie each cell has the
same possible variations, $Z_N$ is also a random variable,
specifically the sum of $N$ random variables. Its probability
distribution can be calculated iteratively, a method well suited
to computer implementation. In fact,
\begin{equation}
P[Z_N = z] = p_{Z_N}(z) = \sum_{\forall x \in \Omega}
p_{Z_{(N-1)}}(z - x)p_X(x)
\end{equation}
and of course, $P[Z_1 = z] = p_{Z_1}(z) = p_{X}(z)$. For example,
checking the probability of fulfilling the mass constraint on a
$3 \times 1$ \ac{DEM} means evaluating
\myEqNoSPace{eq:exampleMassConstraint} with $\Omega = \{-1, 0,
1\}$,
\begin{align}
P[Z_3 = 0] &= p_{Z_3}(0) = \sum_{\forall x \in \Omega} p_{Z_2}(0 -
x)p_X(x) = \\
&= \left( p_{Z_2}(1) p_X(-1)\right) + \left( p_{Z_2}(0)
p_X(0)\right) + \left( p_{Z_2}(- 1) p_X(1)\right) \notag
\label{eq:exampleMassConstraint}
\end{align}
and each $p_{Z_2}(z)$ term is
\begin{align}
P[Z_2 = z] &= p_{Z_2}(z) = \sum_{\forall x \in \Omega} p_{Z_1}(z -
x)p_X(x) = \\
&= \left( p_{Z_1}(z - 1) p_X(-1)\right) + \left( p_{Z_1}(z)
p_X(0)\right) + \notag\\
&+ \left( p_{Z_1}(z + 1) p_X(1)\right). \notag
\end{align}
This example also makes clear the limitation of this method: to
evaluate the probability of the sum of three random variables,
eight further probabilities must be evaluated.

Including the effect of the tolerance value in this analysis is
simple: the probability of the event $A$, \enquote{a \ac{DEM}
fulfills the constraint}, becomes
\begin{equation}
P(A) = \sum_{j=-R}^R P[Z_N = j] = \sum_{j=-R}^R p_{Z_N}(j)
\end{equation}
where $R$ is the range of elevation sums that fulfill the
constraint.

The implementation of this simple method is able to evaluate the
probability of randomly selecting a feasible landscape for a
\ac{DEM} of dimension $51 \times 51$ and $\Omega$ spanning
discretely from $-25$ to $+25$ meters. A code snippet of the
function which evaluates the probability for a given order is
shown in Listing \ref{lst:probCounter}.

\lstinputlisting[firstline=1,
lastline=47, float=tb, language=C++, tabsize=4, numbers=left,
numberstyle=\tiny, stepnumber=2, numbersep=5pt, caption={Code
snippet with the recursive function to evaluate the pdf of the sum
$Z_N$ of $N$ random variables equal to $X$.}, captionpos=t,
label=lst:probCounter]{CodeFiles/probabilityCounter.cpp}

The results of the evaluation are shown in
\myFigNoSpace{fig:massConstraintFeasibility}.

\chapter{Landscape evolution framework: a bit of history on MOEAs}
\label{appChap:moeas}

\section{$\varepsilon$-NSGAII}
\ac{eNSGAII} is an \ac{MOEA} built on \ac{NSGAII}, with the
additional capabilities of $\varepsilon$-dominance archiving,
adaptive population sizing and automatic termination, which
minimize the need for extensive parameter calibration
\cite{kollat_comparing:2006}. The ancestors of \ac{eNSGAII} go
back to \ac{NSGA}, one of the first \acp{MOEA} to be published in
a journal \cite{coello:2006}. Recalling this history shows the
ideas that led to the development of \acp{MOEA}.

\subsection{NSGA}
\ac{NSGA} was created by \citeauthor{srinivas_NSGA:1994}
\cite{srinivas_NSGA:1994} and was one of the first \acp{MOEA}
developed. Actually, the first hint of the possibility of using
\acp{EA} to solve \acp{MOP} appears in a PhD thesis from
\citeyear{rosenberg_simulation:1967}
\cite{rosenberg_simulation:1967}.
\citeauthor{schaffer_VEGA:1985} \cite{schaffer_VEGA:1985} was the
first to develop an \ac{MOEA}, during the mid-1980s: the
\ac{VEGA}. It consists of a simple genetic algorithm with a
modified selection mechanism. At each generation, a number of
sub-populations are generated by performing proportional selection
according to each objective function in turn. These
sub-populations are then shuffled together to obtain a new
population, on which the \ac{EA} applies the crossover and
mutation operators in the usual way.\footnote{Further reading and
explanation of the usual way are provided in
\cite{schaffer_VEGA:1985} and in the general overview by
\citeauthor{coello:2006} \cite{coello:2006}.}

The basic idea for \ac{NSGA} comes from
\citeauthor{goldberg_genetic:1989} \cite{goldberg_genetic:1989}.
He suggested the use of nondominated ranking and selection: the
idea is to find the set of solutions in the population that are
Pareto nondominated by the rest of the population. These solutions
are assigned the highest rank and eliminated from further
contention. Another set of Pareto nondominated solutions is then
determined from the remaining population and assigned the next
highest rank. This process continues until the whole population is
suitably ranked.

\ac{NSGA} is based on several layers of classification of the
individuals, as suggested by \citeauthor{goldberg_genetic:1989},
and a dummy fitness value, proportional to the population size, is
assigned to each of them in order to provide equal reproductive
potential. To maintain the diversity of the population, fitness
sharing is applied to these classified individuals with their
dummy fitness values. Since individuals in the first front have
the maximum fitness value, they always get more copies than the
rest of the population.

The \ac{NSGA} algorithm is not very efficient, because Pareto
ranking has to be repeated over and over again. Evidently, it is
possible to achieve the same goal in a more efficient way.

\subsection{NSGAII}
\ac{NSGAII} is a second-generation \ac{MOEA} developed by
\citeauthor{deb_NSGAII:2002} \cite{deb_NSGAII:2002}. Compared
to \ac{NSGA}, it uses a more efficient non-domination sorting
scheme, eliminates the sharing parameter and adds an implicitly
elitist selection method that greatly aids in capturing Pareto
surfaces. Its tournament selection scheme helps in finding
solutions along the full extent of the Pareto surface.
\citeauthor{deb_scalable:2005} \cite{deb_scalable:2005} have shown
that \ac{NSGAII} performs as well as or better than other
second-generation \acp{MOEA}.

\subsection{$\varepsilon$-NSGAII}
%Con eventualmente immagine di \cite{kollat_comparing:2006}, pag
%798
\ac{eNSGAII}, created by \citeauthor{kollat_comparing:2006},
adds $\varepsilon$-dominance archiving, adaptive population sizing
and automatic termination to the solid basis of \ac{NSGAII}.
The concept of $\varepsilon$-dominance is explained below in
\mySubsec{subs:epsilonDominance}. The population size is
automatically adapted based on the number of non-dominated
solutions found. These solutions are stored in an archive and
used to direct the search by re-injection. In the injection
scheme, 25\% of the subsequent population will be composed of the
$\varepsilon$-non-dominated solutions taken from the archive. This
assists the search by directing it using previously evolved
solutions and by adding new solutions to encourage the exploration
of additional regions of the search space. The search is
terminated if the number and quality of solutions have not
increased by more than $\Delta$\% across two successive runs. The
primary goal of \ac{eNSGAII} is to provide a
highly reliable and efficient MOEA which minimizes the need for
parameterization \cite{kollat_comparing:2006}.

The parameters required to set up an optimization run of
\ac{eNSGAII} are the initial population size, the maximum
\ac{NFE}, the injection rate from the archive and the parameters
related to the simulated binary crossover and mutation operators.
Suggested values for these parameters are shown in
\myChap{chap:instanceOfFramework}.

\section{GDE3}
As the name suggest, the \acl{GDE3} is the third improvement of
the Generalized version of \acl{DE} \ac{EA}
\cite{storn_differential:2005}.

\subsection{DE}
The design principles of the \ac{DE} algorithm were simplicity,
efficiency, and the use of floating-point encoding instead of
binary numbers. Like a typical \ac{EA}, \ac{DE} starts from a
random initial population, which is then improved using selection,
mutation, and crossover operations. The stopping criterion is
usually a predefined upper limit on the number of generations or
function evaluations.

The basic idea of \ac{DE} is that the mutation is self-adaptive to
the objective function surface and to the current population. At
the beginning of the evolution the magnitude of the mutation is
large, because the vectors in the population are far apart in the
search space. As evolution proceeds and the population converges,
the magnitude of the mutation gets smaller.

\subsection{GDE3}
\ac{GDE3} \cite{kukkonen_fast:2006} is a multiobjective
variant of the \ac{DE} algorithm. \ac{GDE3} was one of the top
rated in a competition for \acp{MOEA} \cite{zhang_final:2009}.

Among the characteristic features of \ac{GDE3} is its mutation
operator, which uses the scaled \enquote{difference} between two
population members' decision variable vectors to generate new
candidate solutions. This operator is rotationally invariant: it
does not assume explicit search directions when creating new
solutions. It also means that \ac{GDE3} does not require decisions
to be separable and independent, \ie they can have conditional
dependencies.\footnote{The \ac{SBX} operator used by \ac{eNSGAII}
assumes problems have independent decisions that can be optimized
using only vertical or horizontal translations of the decision
variables.}

Another interesting feature of \ac{GDE3} is its constraint
handling method: it reduces the number of function evaluations
needed, making it more efficient in finding solutions for
constrained problems.

The parameters required to set up an optimization run of
\ac{GDE3} are: initial population size, maximum \ac{NFE},
crossover rate and step size of the \ac{DE} operator. With only
four parameters, \ac{GDE3} appears very suitable for applications.
As for \ac{eNSGAII}, suggested values for these parameters are
shown in \myTab{tab:MOEAandParameters} in
\myChap{chap:instanceOfFramework}.

\chapter{Landscape evolution framework: all the results}
\label{appChap:allResults}

\section{First experiment: \myExpOneNoSpace}
The statistical distribution of Horton's bifurcation ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:MOGLE_HortonBifurcClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/MOGLE_HortonBifurcRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $3.954
\pm 2.438$ & $4.106 \pm 2.132$ & $3.048 \pm 0.7599$ \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.3273$ &
$0.2657$ & $0.2959$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's bifurcation
ratio for the clusters of experiment
\myExpOneNoSpace]{Statistical distribution of Horton's
bifurcation ratio for the clusters of experiment
\myExpOneNoSpace. For more detailed information about the
image, refer to \myFigNoSpace{fig:RaClusters_MOGLE}.}
\label{fig:MOGLE_HortonBifurcClusters}
\end{figure}

The statistical distribution of Horton's length ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:MOGLE_HortonLengthClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/MOGLE_HortonLengthRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $2.319 \pm
1.284$ & $2.401 \pm 1.210$ & $1.888 \pm 0.7986$ \\	
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.6179$ &
$0.6042$ & $0.5262$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's length ratio for
the clusters of experiment \myExpOneNoSpace]{Statistical
distribution of Horton's length ratio for the clusters of
experiment \myExpOneNoSpace. For more detailed information about
the image, refer to \myFigNoSpace{fig:RaClusters_MOGLE}.}
\label{fig:MOGLE_HortonLengthClusters}
\end{figure}

The statistical distribution of Horton's slope ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:MOGLE_HortonSlopeClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/MOGLE_HortonSlopeRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $5.560 \pm
22.52$ & $1.006 \pm 1.279$ & $0.9899 \pm 0.1487$ \\	
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.0$ &
$0.0$ & $0.0$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's slope ratio for the
clusters of experiment \myExpOneNoSpace]{Statistical
distribution of Horton's slope ratio for the clusters of
experiment \myExpOneNoSpace. For more detailed information about
the image, refer to \myFigNoSpace{fig:RaClusters_MOGLE}.}
\label{fig:MOGLE_HortonSlopeClusters}
\end{figure}

The statistical distribution of the contributing area exponent for
the basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:MOGLE_AreaExponentClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/MOGLE_AreaExponents_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $-0.5533
\pm 0.1012$ & $-0.5508 \pm 0.07141$ & $- \pm -$ \\	
\bottomrule
\end{tabularx}

\caption[Statistical distribution of contributing area exponent
for the clusters of experiment \myExpOneNoSpace]{Statistical
distribution of the contributing area exponent for the clusters of
experiment \myExpOneNoSpace. For more detailed information about
the image, refer to \myFigNoSpace{fig:RaClusters_MOGLE}.}
\label{fig:MOGLE_AreaExponentClusters}
\end{figure}

\section{Second experiment: \myExpTwoNoSpace}
The statistical distribution of Horton's bifurcation ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:RAINY_HortonBifurcClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/RAINY_HortonBifurcRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $3.954
\pm 2.438$ & $4.106 \pm 2.132$ & $3.048 \pm 0.7599$ \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.3273$ &
$0.2657$ & $0.2959$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's bifurcation
ratio for the clusters of experiment
\myExpTwoNoSpace]{Statistical distribution of Horton's
bifurcation ratio for the clusters of experiment
\myExpTwoNoSpace. For more detailed information about the
image, refer to \myFigNoSpace{fig:RAINY_HackExponents_clusters}.}
\label{fig:RAINY_HortonBifurcClusters}
\end{figure}

The statistical distribution of Horton's length ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:RAINY_HortonLengthClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/RAINY_HortonLengthRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $2.319 \pm
1.284$ & $2.401 \pm 1.210$ & $1.888 \pm 0.7986$ \\	
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.6179$ &
$0.6042$ & $0.5262$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's length ratio for
the clusters of experiment \myExpTwoNoSpace]{Statistical
distribution of Horton's length ratio for the clusters of
experiment \myExpTwoNoSpace. For more detailed information about
the image, refer to \myFigNoSpace{fig:RAINY_HackExponents_clusters}.}
\label{fig:RAINY_HortonLengthClusters}
\end{figure}

The statistical distribution of Horton's slope ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:RAINY_HortonSlopeClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/RAINY_HortonSlopeRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $5.560 \pm
22.52$ & $1.006 \pm 1.279$ & $0.9899 \pm 0.1487$ \\	
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.0$ &
$0.0$ & $0.0$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's slope ratio for the
clusters of experiment \myExpTwoNoSpace]{Statistical
distribution of Horton's slope ratio for the clusters of
experiment \myExpTwoNoSpace. For more detailed information about
the image, refer to \myFigNoSpace{fig:RAINY_HackExponents_clusters}.}
\label{fig:RAINY_HortonSlopeClusters}
\end{figure}

The statistical distribution of the contributing area exponent for
the basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:RAINY_AreaExponentClusters}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/RAINY_AreaExponents_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $58$
& $382$ & $185$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $-0.5533
\pm 0.1012$ & $-0.5508 \pm 0.07141$ & $- \pm -$ \\	
\bottomrule
\end{tabularx}

\caption[Statistical distribution of contributing area exponent
for the clusters of experiment \myExpTwoNoSpace]{Statistical
distribution of the contributing area exponent for the clusters of
experiment \myExpTwoNoSpace. For more detailed information about
the image, refer to
\myFigNoSpace{fig:RAINY_HackExponents_clusters}.}
\label{fig:RAINY_AreaExponentClusters}
\end{figure}

\section{Third experiment: \myExpThreeNoSpace}
The statistical distribution of Horton's bifurcation ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:HortonBifurcClusters_IDW}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/HortonBifurcRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $40$
& $118$ & $40$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $2.755
\pm 0.6584$ & $3.750 \pm 1.112$ & $3.357 \pm 0.9667$ \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cdf within range} & $0.3839$ &
$0.5297$ & $0.5178$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's bifurcation
ratio for the clusters of experiment
\myExpThreeNoSpace]{Statistical distribution of Horton's
bifurcation ratio for the clusters of experiment
\myExpThreeNoSpace. For more detailed information about the
image, refer to \myFigNoSpace{fig:HackClusters_IDW}.}
\label{fig:HortonBifurcClusters_IDW}
\end{figure}

The statistical distribution of Horton's length ratio for the
basins within the clusters analyzed in
\mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:HortonLengthClusters_IDW}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\columnwidth]{Images/HortonLengthRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $40$
& $118$ & $40$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $1.203 \pm 0.2007$ & $1.324 \pm 0.4964$ & $1.367
\pm 0.4618$ \\	
\midrule
\tablefirstcol{p{0.3\textwidth}}{CDF within range} & $0.1541$ & $0.3673$ & $0.3387$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's length ratio for the
clusters of experiment \myExpThreeNoSpace]{Statistical
distribution of Horton's length ratio for the clusters of
experiment \myExpThreeNoSpace. For more detailed information about the image, refer
to \myFigNoSpace{fig:HackClusters_IDW}.}
\label{fig:HortonLengthClusters_IDW}
\end{figure}

The statistical distribution of the exponents of the
probability distribution of drained area for the basins within the
clusters analyzed in \mySubsec{subs:IDWclustering} is shown in
\myFigNoSpace{fig:AreaExpClusters_IDW}.

\begin{figure}
\myfloatalign
\includegraphics[width=0.65\textwidth]{Images/AreaExponents_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Cluster points} & $40$
& $118$ & $40$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Mean and std dev} & $-0.8221 \pm
0.04182$ & $-0.9061 \pm 0.1979$ & $-0.6961 \pm 0.1050$ \\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of the exponents of the
probability distribution of contributing area for the clusters of
experiment \myExpThreeNoSpace]{Statistical distribution of the
exponents of the probability distribution of contributing area
for the clusters of experiment \myExpThreeNoSpace. For more
detailed information about the image, refer to
\myFigNoSpace{fig:HackClusters_IDW}.}
\label{fig:AreaExpClusters_IDW}
\end{figure}

\chapter{The Unsung Heroes}
We believe that engineering is built on tools, whatever form they
take. We therefore present here the tools we used, which are
mainly software tools.

\section{Results production}
The first step toward producing results was to write the model
software. We used the GNU Compiler Collection for \ac{C++}. As
editor, we chose \emph{Eclipse}, because it is open source and
because its plugin architecture allows great extensibility. It is
available at \url{http://www.eclipse.org/}.

To keep the source code we wrote up to date, we needed a code
repository. We chose the \emph{Google Code} service
(\url{code.google.com}), managed with the \emph{Subversion}
version control system. We therefore also used the
\emph{Subversive} plugin to integrate these functionalities into
\emph{Eclipse}.

After writing the code, we had to perform the experiments. We used
the cluster system that the Pennsylvania State University kindly
gave us access to. Experiment control was based on automation
scripts written in the \emph{bash} shell language. The scripts
automated the job submission to the cluster computers, the
formatting of the result files, and the launching of the \ac{MOEA}
framework \cite{moeaframework:2013} command-line utilities to
analyze these data from the perspective of \acp{MOEA}.
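As an illustration, a submission script of this kind might look
like the following minimal sketch. All names here (the
\texttt{qsub} scheduler command, \texttt{run\_experiment.sh}, the
results directory, the use of a random seed) are our assumptions
for the example, not taken from the actual scripts.

```shell
#!/bin/sh
# Hypothetical sketch of an automation script: submit one cluster job
# per random seed, logging each submission command. The scheduler
# invocation is only echoed here instead of actually being executed.

RESULTS_DIR=results
mkdir -p "$RESULTS_DIR"
: > "$RESULTS_DIR/submitted.log"   # start with an empty submission log

for seed in 1 2 3; do
    # A real script would run this command; we only record it.
    echo "qsub -v SEED=$seed run_experiment.sh" >> "$RESULTS_DIR/submitted.log"
done
```

A real version would then poll the scheduler queue, reformat the
result files, and invoke the \ac{MOEA} framework command-line
utilities on them.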

The images of the Pareto fronts were produced with
\emph{AeroVis}, a visualization software specialized in Pareto
fronts. \emph{AeroVis} is currently being site licensed from the
Aerospace Corporation through Penn State's Intellectual Property
office.

The statistical analysis of Horton's indexes was performed in
\emph{Matlab}\textregistered{}, a high-level language and
interactive environment for numerical computing. It provides many
ready-to-use scientific techniques, such as $k$-means clustering.

Finally, we used \emph{Dropbox} (\url{www.dropbox.com}) for cloud
storage and file hosting services.

\section{Thesis writing}
We began writing this thesis at the start of March 2013. By that
time, we had already imagined most of the contents of the thesis,
and we started the execution of the experiments shown here. We
also organized the scientific literature we had read, which served
as the basis for the first chapters. The last part to be defined
was the conclusion, since it is based on the experimental results.

\subsection{Technical side}
We were asked to use the \LaTeX\ system, and we would have chosen
it anyway.
As the website \url{http://latex-project.org/} states, \blockquote{
\LaTeX\ is a document preparation system for high-quality
typesetting. It is most often used for medium-to-large technical
or scientific documents but it can be used for almost any form of
publishing. \LaTeX\ is not a word processor! Instead, \LaTeX\
encourages authors not to worry too much about the appearance of
their documents but to concentrate on getting the right content.
[It] is based on the idea that it is better to leave document
design to document designers, and to let authors get on with
writing documents. \LaTeX\ is based on Donald E. Knuth's \TeX\
typesetting language or certain extensions. \LaTeX\ was first
developed in 1985 by Leslie Lamport, and is now being maintained
and developed by the \LaTeX3 Project.}

We can confirm every single word. Indeed, we left the design of
the appearance to \enquote{document designers}. In particular, we
chose the style \enquote{Classic Thesis}, available at
\url{http://code.google.com/p/classicthesis/}. It is \enquote{an
homage to the elements of typographic style} and is inspired by
\citeauthor{bringhurst:2002}'s work \citetitle{bringhurst:2002}
\cite{bringhurst:2002}.

A little research on Internet resources was done to find a simple
yet powerful bibliography management system. We ended up choosing
\emph{Zotero} because of its easy connection with most article
databases and with \emph{BibTeX}. As the website states at
\url{http://www.zotero.org/}, \blockquote{Zotero [zoh-TAIR-oh] is
a free, easy-to-use tool to help you collect, organize, cite, and
share your research sources. It lives right where you do your work
--- in the web browser itself}. It also has the capability to
retrieve the article itself and to link the file with its
database, which effectively became our library.

The name \emph{BibTeX} refers to both a tool and a file format,
which are used to describe and process lists of references, mostly
in conjunction with \LaTeX\ documents. It is supported by
\emph{Zotero} as well as by Google Scholar, Web of Science, and
many other research-related resources.

Last but not least, we relied on the plugin architecture of
\emph{Eclipse} to use it as a \LaTeX\ editor, thanks to the
\emph{TeXlipse} plugin available at
\url{http://texlipse.sourceforge.net/}. With the
\emph{pdf4Eclipse} plugin we were also able to see the changes in
the document appearance each time we saved the \TeX\ files, thanks
to the automatic compilation that can be triggered in
\emph{Eclipse}. We could also rely on the \emph{Google Code}
repository to take care of merging the chapters written by the two
of us with no fear of losing work.
