\chapter{Multi-objective framework: implementation and technicalities}
\label{chap:instanceOfFramework}
 
\begin{flushright}{\slshape    
   For the things we have to learn before we can do them, \\
   we learn by doing them.} \\ \medskip
    --- \citeauthor{aristotele}, \citetitle{aristotele}, \citeyear{aristotele} 
\end{flushright} 

The previous chapters introduced many open questions and the
uncertainty present in the field of landscape evolution. To
proceed with testing our idea, we had to make a number of
assumptions, described in \myChapNoSpace{chap:testingOptimality}.
At the end of that chapter, the chosen hypotheses and the thesis to
test were incorporated into a framework. This allows us to
rationalize the scientific process of testing hypotheses by
expressing them explicitly. We also hope that the framework will be
a valid tool for communicating the methodology applied here in a
common language, so that the results we reach can be useful to
other researchers.
\MyFig{fig:flowchart} shows the operational part of the
framework, using the common language of flowcharts. This 
chapter starts from the inner part of the model and moves towards
the final analysis of the results, presenting the technicalities
and methodologies adopted.
For this reason, the order of the topics in the chapter
may sometimes differ from the order in which they appear in the flowchart.
We will refer back to the flowchart at the beginning of each section,
so that the reader does not get lost.

\begin{figure}[p]
\myfloatalign
\includegraphics[width=0.9\columnwidth]{Images/flowchart.pdf}  
\caption[Operational flowchart that specifies the
framework]{Operational flowchart that specifies the framework. The
upper part (\textit{a}) represents the operations performed. The
parallelograms represent interactive steps where decisions by the
user are required. The rectangles indicate generic processing
steps: the ones with lateral stripes are complex processes
composed of sub-processes, expanded in parts \textit{b},
\textit{c} and \textit{d}. The rhombuses symbolize conditional
operations and usually implement iterative steps.}
\label{fig:flowchart}
\end{figure}

\section{The digital elevation model}
\label{sec:theDEM}
This section \graffito{In our experiments, \acp{DEM} of $51 \times 51$ 
cells were used (each cell having an area of $1\ \text{km}^2$).}
refers mostly to the model setup part of the
flowchart in \myFig{fig:flowchart}.

The use of \acl{DEM}s (\acs{DEM}s) has significantly changed the way
Earth's surface processes are studied. The increasing availability
of data in this format and the growing computational power of computers
allow researchers to perform their analyses with a wider and
deeper perspective. Therefore, the use of \acp{DEM} when studying
landscape evolution is convenient. Many landscape evolution models
rely on this data structure to represent surfaces
\cite{rigon:1993}.

An example of \ac{DEM} is provided in the left side of 
\myFigNoSpace{fig:realDEM}.

The work by \citeauthor{paik:2011} \cite{paik:2011},
from which this one started, is also based on \acp{DEM}. In his
paper, he uses a square domain of $201 \times 201$ cells, each
grid cell having an area of $1\ \text{km}^2$. In this work, however,
experiments conducted on \acp{DEM} of such a size
require too many computational resources and become too
expensive when many experiments with different setups have to be
conducted. Therefore, we chose to reduce the dimension by a ratio
of $4$, performing our simulations on square \acp{DEM} of $51
\times 51$ cells. Many hydrological and geomorphological
studies are based on \acp{DEM} of these dimensions:
\citeauthor{horton_erosional:1945}'s original work that led
him to formulate the laws of stream composition was based on basins
with areas between $15.8$ and $479$ square miles, which roughly
correspond to those generated by our model. Nonetheless, the
effect of scale should be studied in future
research on this topic, and the analysis of larger landscapes
remains to be performed.

The planar shape of the \ac{DEM} can influence the whole landscape
and the hydrological networks grown on it, as found by
\citeauthor{rigon:1993} \cite{rigon:1993} in their studies
on \ac{OCN}. They studied triangular domains and 
hydrological networks grown according to the \ac{TEE} criterion,
finding that a minimum angle of the planar projection of the
domain is required for basins to develop without imposed boundary 
constraints. Similar experiments with different \ac{DEM} shapes
should be performed with the model presented here, but again we
leave this topic to future research.

The resolution of the \acp{DEM} used here is $1\ \text{km}^2$, as in
\citeauthor{paik:2011}'s work. Given the dimension of $51\times
51$ cells, the covered area is $2601\ \text{km}^2$. The resolution
may seem too coarse, but it allows us to simulate a surface large
enough to observe river dynamics in landscape evolution while
keeping a reasonable computational cost.

\subsection{Boundary conditions}
Boundary conditions \graffito{The \ac{DEM} boundary elevations are 
at the sea level, in our experiments.}
for the model and the \ac{DEM} can deeply
affect the simulation results. We used the same boundary
condition as \citeauthor{paik:2011}, \ie we imposed an elevation
of zero on the contour cells. This elevation
represents the sea level; therefore, the simulated \acp{DEM} are
actually islands. It has been suggested that they can also
represent other types of landscapes, \eg the top of a mountain, by
translating each cell by a suitable elevation
value.\footnote{We thank \myOtherOtherSupervisor for the
suggestion.}

To give an idea, the French island of Réunion, located in the Indian
Ocean (east of Madagascar), has a surface of $2\,512\ \text{km}^2$,
about the same as our model \acp{DEM}. Digital elevation data of the
island, collected by \ac{NASA}, are shown in
\myFigNoSpace{fig:realDEM}.

\begin{figure}
\myfloatalign
\includegraphics[width=\columnwidth]{Images/reunion.pdf}  
\caption[Réunion island elevation data]{On the left, the \ac{SRTM}
v4 digital elevation data from \acs{NASA} of Réunion island,
France, Indian Ocean. Data have been processed by the authors. On
the right, the same island seen in Google Earth \copyright, from
Terrametrics images, 2013.}
\label{fig:realDEM}
\end{figure}

\subsection{Initial conditions}
For the first two experiments described in
\myChap{chap:simAndFindings}, it is necessary to define an initial
condition, \ie an initial shape of the landscape surface.
\citeauthor{paik:2011} used a pyramid-shaped \ac{DEM} to
\blockcquote[see][p. 689]{paik:2011}{maximize the terrestrial
area (effective domain)}. The pyramid is discretized on the grid
and its lateral slope is $\frac{4}{1000} = 0.004$.

We use the same configuration in the smaller domain of $51
\times 51$ cells (or square kilometers). The same slope used by
\citeauthor{paik:2011} makes the top of the pyramid reach an elevation
of $100$ meters above sea level. An example of the resulting initial
\ac{DEM} is given in \myFig{fig:pyramid51x51}. Like the other
parameters, these settings should also be varied to study how they
affect the optimal landscape, but again, these experiments are left
to future research.

\begin{figure}
\myfloatalign
\includegraphics[width=\columnwidth]{Images/DEM_51x51_Paik.pdf}  
\caption[\ac{DEM} initial condition for the experiments
performed]{\ac{DEM} initial condition for the experiments performed
here: the base of the pyramid is a square of $51
\times 51$ cells (or square kilometers), covering a surface of $2601$
square kilometers, with the same number of cells.}
\label{fig:pyramid51x51}
\end{figure}

\subsection{Spatial interpolator}
\label{subs:spatialInterpolator}
The number of cells in the \ac{DEM} can be appreciated in
\myFigNoSpace{fig:pyramid51x51}. For each of them, the
optimization algorithm has to evaluate the function $f(\cdot)$,
\ie the elevation value.

As highlighted in \mySubsec{subs:modelInputs}, the great number of
values can ``puzzle'' the optimization algorithm and make it
work more like a random search procedure, losing part of the
connection to optimality principles and the guidance they offer.
To avoid this possible failure mode, we reduced the
number of elevation values requested from the optimizer. A spatial
interpolator is then used to recreate a surface that actually becomes
the landscape. In other words, we ask the optimizer for a sparser
set of sample values of the function $f(\cdot)$, with a resolution
coarser than $1\ \text{km}^2$.

\subsubsection{Spatial interpolation}
In numerical analysis, multivariate interpolation or spatial
interpolation refers to the interpolation of functions of more
than one variable. The function to be interpolated is known at
given points $(x_i, y_i, z_i, \ldots)$ and the interpolation
problem consists of yielding values at arbitrary points
$(x,y,z,\ldots)$. Many interpolation methods exist: they
differ in terms of precision and complexity of the estimated
surface.

\paragraph{Inverse Distance Weighting}
\acf{IDW} is a deterministic method for multivariate
interpolation with a known scattered set of points. The values assigned
to unknown points are calculated as a weighted average of
the values available at known points. The applied weight is an
inverse function of the distance between the point to be
calculated and each of the known points, hence the name of
the methodology. The method is simple to implement and execute
and requires little parameter setup. It is therefore
suitable for use in this framework.

A longer explanation of this method and a code snippet of the
algorithm implemented by the authors are given in
\myAppendixNoSpace{appChap:IDW}. By using the interpolation, it was
possible to reduce the number of input variables required
by the optimizer: from the original $2401$ cells belonging to
the $51 \times 51$ \ac{DEM} to $16$ over the same
surface.\footnote{The contour is excluded because it is not modifiable,
given the presence of the boundary condition.}

\section{DEM elevations sum constraint}
\label{sec:sumOfElConstraint}
Before concluding \graffito{In our experiments, the sum of \ac{DEM} 
elevations is conserved along the process, keeping its variation 
within a tolerance.} the description of the model setup, \ie section 
\textit{(b)} in \myFigNoSpace{fig:flowchart}, with the description of the
depression filling and flow routing algorithms, one last important feature
related to the \ac{DEM} must be described: the \ac{DEM} elevations sum
constraint.
The model by \citeauthor{paik:2011} features a constraint called
\enquote{tectonic condition}, based on the hypothesis that the
mass gained because of uplift equals the total loss of sediment
mass from the whole landscape. The basis of this hypothesis is
\blockcquote[see][p. 687]{paik:2011}{that there can be a balance
between erosion and uplift as an increase in tectonic uplift rate
can lead to higher relief and then this again will result in a
higher erosion rate}. He cites observations from Taiwan, the Southern
Alps of New Zealand and the Swiss Central Alps to support this
statement \cite{paik:2011}.

This condition can also be derived from
\myEqNoSpace{eq:massConservation}: the rate $b_t$ represents river
dynamics, while $U(x,y)$ represents tectonic uplift. This
requirement also means that the sum of elevations remains the
same during the optimization process. Therefore, the total amount
of potential energy available to the water flowing on the
surface remains the same, and the resulting landscapes can be
compared without the need to take different energy levels into
account.

This constraint is enforced by checking the mass variation between
the initial condition and the optimal landscape. Actually, mass is
not the proper term: thanks to the hypothesis of isotropy of the
landscape, mass is proportional to the sum of the elevations
in the landscape, \ie the sum of the values contained in each \ac{DEM} 
cell.\footnote{The mass constraint is verified after
the depression filling phase, which is explained in
\mySubsec{subs:DF}.}

\subsection{Feasibility}
This constraint has proved to be very challenging:
\myFig{fig:massConstraintFeasibility} shows the probability of
randomly choosing a landscape that fulfills the constraint. The
figure is based on the evaluation of such a probability, given a discrete
set of variations in the elevation values that can be applied to
each cell of the \ac{DEM} during the whole optimization. The full
discussion and the algorithm used for the calculation are given in
\myAppendixNoSpace{appChap:constraint}.

Without any tolerance on mass variation (\ie the mass of the final \ac{DEM} is 
exactly the same as that of the initial one), the probability of randomly
choosing a feasible control is less than $0.1$ for a $51 \times 51$
\ac{DEM}.

\begin{figure}
\myfloatalign
\includegraphics[width=\columnwidth]{Images/feasibilityNR51.pdf}  
\caption[Theoretical probability of randomly choosing a surface
respecting the \ac{DEM} elevations sum constraint]{Theoretical
probability of randomly choosing a surface respecting the \ac{DEM}
elevations sum constraint: the blue area represents low
probabilities, the red area higher ones. On the $y$-axis is the
tolerance on the accepted mass variation. On the $x$-axis is
the range of possible variations in meters, \eg $5$ means that
each cell can vary from its initial condition by $\pm 5$ meters.
\acs{DEM} dimensions are $51 \times 51$.}
\label{fig:massConstraintFeasibility}
\end{figure}

\subsection{Tolerance and application}
\MyFig{fig:massConstraintFeasibility} demonstrates how difficult
it is to randomly select a landscape that fulfills the mass
constraint as the \ac{DEM} dimension grows. As in
\citeauthor{paik:2011}'s paper \cite{paik:2011}, we added a
tolerance value to the mass constraint, \ie we accepted small
changes in the total elevations sum.
Even a tolerance value as small as $0.001$ is able to increase
the probability to more than $80\%$.

It must be underlined that the \acf{DF} phase can also alter the
elevations in the \ac{DEM}. The mass constraint is checked after
this phase. The effect of this change can be assessed empirically
with the following data. In experiment \myExpOne, which will be
explained later, according to
\myFigNoSpace{fig:massConstraintFeasibility}, with a maximum absolute
change of $9$ meters and a tolerance level of $0.001$, a random
search should have a feasibility probability between 30\% and
40\%. The random search actually performed had instead a feasibility
of $3.40\%$ over $30$ million different \acp{DEM} tried.

That said, it is true that \acp{MOEA} can manage constraints and
tackle them more efficiently than mere random sampling. For
example, the first experiment (see \myExpOne and \myExpTwo), which
matches exactly the situation analyzed in the figure, should have
a percentage of landscapes fulfilling the constraint of about 30\%. With
the effect of \ac{DF}, we measured 3\%. \acp{MOEA} instead showed a
percentage of approximately 60\%. This demonstrates the ability of
\acp{MOEA} to efficiently understand the problem.

\subsection{Spatial interpolator and mass constraint}
In the experiments shown in \mySecNoSpace{sec:expIDW16}, we use the
interpolator presented in \mySubsecNoSpace{subs:spatialInterpolator}. 
The mass constraint is still verified, in the sense that the mass of
the optimal landscapes does not differ from the initial condition
by more than the tolerance. However, the possibility of
exploring a larger number of landscape surfaces extends the range of
the controls. Therefore, the probability of randomly
selecting a feasible landscape is even lower (with the same tolerance
value). We did not evaluate it analytically, but the
experiments with \acp{MOEA} showed a percentage of feasible
landscapes among those tried between $0.4\%$ and $20\%$. A
random sampling of $400\,000$ different values for each of the $16$
input values required by \ac{IDW}, which created a set of
$13\,600\,000$ different \acp{DEM}, was not able to find a single
landscape respecting the mass constraint.

\section{The extraction of hydrological networks}
\label{sec:extractionHydroNet}

\begin{figure}
\myfloatalign
\includegraphics[width=\columnwidth]{Images/flowchart_model.pdf}  
\caption[Operational flowchart of the steps performed by the model
developed for this thesis]{Operational flowchart of the steps
performed by the model developed for this thesis: the
parallelograms represent input and output operations;
rectangles are generic processing steps; rhombuses are
conditional forks in the work flow; ellipses indicate the
start and the end of the flowchart.}
\label{fig:modelFlowchart}
\end{figure}

The core part of the work-flow shown in \myFigNoSpace{fig:flowchart},
\ie what is required to \enquote{solve the optimization problem} 
(sector \textit{(c)} in \myFigNoSpace{fig:flowchart}),
is the process of finding the solution of
\myEqNoSpace{eq:riverDefinitionMultiObj}. Hidden inside it are all the
operations that connect a given landscape with the values of its
optimality criteria. Some characteristics of this procedure have
already been explained in the previous \mySec{sec:theDEM} and
\mySecNoSpace{sec:sumOfElConstraint}. The operational part of the
\ac{DEM} analyzing procedure is explained in the
following paragraphs. An overview of this procedure is given in
\myFigNoSpace{fig:modelFlowchart}.

The following algorithms have been implemented by the authors of
this thesis in \ac{C++} software, which is detailed
in \myAppendixNoSpace{appChap:theModel}.

\subsection{Depression filling}
\label{subs:DF}
As explained \graffito{\citeauthor{planchon_fast:2002}'s \ac{DF} 
algorithm was used} 
in \mySecNoSpace{sec:theModel}, the depression filling
operation is performed in order to remove pits and allow each
cell to be connected with an outlet. The algorithm chosen
and implemented for this step is described,
and then compared with the one used by \citeauthor{paik:2011} in
his \ac{GLE} model, in the next paragraphs.

\subsubsection{\ac{DF}: the algorithm}
The algorithm used in the model proposed within this thesis is the
extended version of the one conceived by
\citeauthor{planchon_fast:2002} in \cite{planchon_fast:2002}.
In a few words, it \blockcquote[see][p.
159]{planchon_fast:2002}{first adds a thick layer of water over
all the \ac{DEM} and then drains excess water}, so that all pits
are filled.
Their algorithm is now illustrated:\footnote{Here the description
is summarized; for a more complete one see \cite{planchon_fast:2002}.}
\begin{description}
\item[Initialization]
In this initial phase, the following elements are defined:
\begin{itemize}
  \item $Z_\text{transient}$: a transient elevation matrix.
  It has the same dimensions as the initial \ac{DEM} and the
  elevation values of all its cells, apart from the ones on the
  border, are set to a very high number.
  In our \acs{C++} model, considering that elevation cells assume
  integer values, they were set equal to
  \texttt{INT\_MAX} $= 2\,147\,483\,647$, which is the maximum
  value an integer can assume in \acs{C++}.
  The border cells are set equal to the corresponding values of
  $Z_\text{DEM}$, $Z_\text{DEM}$ being the matrix containing the
  depressed \ac{DEM} elevations;
  \item $\varepsilon$: the minimum positive elevation
  distance required.
  It is the positive elevation difference a cell is required
  to have with respect to the lowest of its neighbouring cells, in
  order not to be considered depressed or produce flat areas.
  In our model, the value chosen for $\varepsilon$ is $1$ cm. 
\end{itemize}
\item[Depression filling] 
This is the core of the algorithm. 
The operations executed after the initialization phase are
essentially two:
\begin{itemize}
  \item the values of $Z_\text{transient}$ in the positions related
  to the cells not depressed in the initial \ac{DEM} are lowered
  and set equal to the values in the \ac{DEM}:
  \begin{equation}
  Z_\text{transient}(x,y) = DEM(x,y) \qquad \forall(x,y)\text{
  not depressed}
  \label{eq:notDepressedCells}
  \end{equation}
  \item correction of depressions: the values of
  $Z_\text{transient}$ in the positions related to the cells
  depressed in the initial \ac{DEM} are set equal to the elevation
  of their lowest neighbour, $(x_n, y_n)$, plus the minimum
  positive elevation $\varepsilon$, as shown in
  \myEqNoSpace{eq:depressedCells}.
    \begin{equation}
  Z_\text{transient}(x,y)=Z_\text{transient}(x_{n},y_{n})+\varepsilon 
  \label{eq:depressedCells}
  \end{equation}
\end{itemize}
In particular, the two mentioned operations are performed by
exploring the \ac{DEM} and $Z_\text{transient}$ iteratively:
\begin{enumerate}
  \item first of all, starting from each cell of the border and
  moving upwards iteratively, the values of $Z_\text{transient}$ for
  not depressed cells are corrected according to
  \myEqNoSpace{eq:notDepressedCells};
  \item secondly, the \ac{DEM} and $Z_\text{transient}$ are
  explored varying the scanning direction, and the
  $Z_\text{transient}$ values are changed, both for depressed and
  not depressed cells, according to \myEq{eq:depressedCells} and
  \myEq{eq:notDepressedCells} respectively.
\end{enumerate}
\item[Termination]
At this point, the matrix $Z_\text{transient}$ is equal to the initial
\ac{DEM} where it was not depressed, and corrected where it was
depressed.
Therefore, the algorithm ends and the \ac{DEM} elevations are
substituted with the $Z_\text{transient}$ ones.
\end{description}

\subsubsection{A comparison with \citeauthor{paik:2011}'s \ac{DF} algorithm}
A comparison between the \ac{DF} algorithm used in our model
and the one used in \citeauthor{paik:2011}'s \ac{GLE} model is
needed, in order to explain the reason for using a different
algorithm.

Paik's \ac{DF}, as it is described in \cite{paik:2011}, works as
follows:
\begin{enumerate}
  \item each depressed cell is compared with the lowest among
  its neighbouring cells;
  \item the elevation difference between the two is evaluated;
  \item finally, \blockcquote[see][p. 688]{paik:2011}{the depression cell
  is lifted by half of the gap and the neighbouring cell is
  lowered by the same amount}.
\end{enumerate}
According to \citeauthor{paik:2011}'s \ac{DF} algorithm, not only
are the depressed cells filled, but the elevation of the
neighbouring cell is also modified, in order to guarantee the
conservation of the total elevations sum.
A comparison between the behaviour of the two \ac{DF} algorithms is
provided in \myFig{fig:depressionFilling}.
\begin{figure} 
\myfloatalign
\includegraphics[width=\columnwidth]{Images/depressionFilling.pdf}
\caption[Depression filling techniques comparison]{Depression
filling techniques comparison. In the upper part of the figure,
\citeauthor{paik:2011}'s algorithm is shown \cite{paik:2011}.
Since $\frac{\Delta}{2}+1$ is added to the depressed cell and
$\frac{\Delta}{2}$ is subtracted from its lowest neighbour, the
elevations sum is conserved apart from $1$ cm.
In the lower part of the figure, \citeauthor{planchon_fast:2002}'s
algorithm is shown \cite{planchon_fast:2002}.
Since the only operation is adding $\Delta +\varepsilon$ to the
depressed cell, $\varepsilon$ being the minimum positive elevation
distance, the elevations sum is not conserved.}
\label{fig:depressionFilling}
\end{figure} 

The main reason for choosing an alternative algorithm to
\citeauthor{paik:2011}'s is that, as emerged from
the testing and debugging activity performed on our model, 
\citeauthor{paik:2011}'s \ac{DF}
works fine as long as there are single-cell depressions, but it
may enter an endless loop when there are extended depressions, \ie
depressions including two or more cells.
In this latter case, in fact, the elevation gap is continuously moved
to and from the depressed cell, without solving the depression and
reaching a termination condition.
It might be argued that, even if the algorithm we used is more
stable, it does not respect the elevations sum constraint detailed
in \mySecNoSpace{sec:sumOfElConstraint}.
In response, it is possible to say that, even if the
\ac{DF} algorithm by itself does not enforce any elevations
constraint, it is placed in the model in such a position that the
fulfillment of the constraint is ensured.
In fact, it is performed after the controls (\ie the value of the
function $f(\cdot)$ in every point $(x,y)$ of the \ac{DEM}) are
applied and before verifying the elevations sum constraint.
In this way, the \acp{DEM} corrected with \ac{DF} are submitted to the
constraint, and it is guaranteed that only control masks which
respect it are accepted and applied to the \ac{DEM}, being
otherwise rejected.

\subsection{Flow routing extraction: \ac{GD8}}
The problem \graffito{\citeauthor{paik_global:2008}'s \ac{GD8} 
algorithm was used for flow routing extraction} 
of performing a good flow direction extraction is
illustrated in \cite{paik_global:2008}.
In our model, the improved version of \ac{D8} called \ac{GD8},
proposed by \citeauthor{paik_global:2008} in
\cite{paik_global:2008}, is implemented, as already stated in
\myChapNoSpace{chap:testingOptimality}.

The reason for the choice is twofold: first of all, as the author
affirms as a concluding remark, based on the application of his
method, \blockcquote[see][p. 9]{paik_global:2008}{the proposed
algorithm successfully reduces the uncertainty residing in local
searches and produces more natural flow patterns, while it is
still simple, computationally efficient, and easy to use}.
Secondly, it was chosen in order to implement a model coherent
with \citeauthor{paik_global:2008}'s \ac{GLE} model and to be able
to consistently compare the output networks of the two models.
The algorithm is described and commented, by comparison with the
simple \ac{D8} algorithm, in the following paragraphs.

\subsubsection{\ac{GD8}: the algorithm}
The algorithm, implemented according to its features described in
\cite{paik_global:2008}, is executed with this work-flow:
\begin{description}
  \item [Initialization:] the primary and secondary directions are
  defined for each cell of the \ac{DEM}.
  \begin{itemize}
    \item The \myEmph{primary direction} is defined according to
    the concept of \ac{LSD}. Given that each \ac{DEM}
    cell is surrounded by eight other cells, the slopes between
    the central cell and each of its eight neighbours are compared
    and the direction of maximum slope is assigned as the
    primary direction.
  	\item The \myEmph{secondary direction} is
  	\blockcquote{paik_global:2008}{the steeper direction between
  	clockwise and counterclockwise directions adjacent to the
  	\ac{LSD}}.
  \end{itemize}
  \item[Direction choice:] this is the core phase of the
  algorithm, which consists in choosing whether to assign the primary
  or the secondary direction.
  The choice is performed by comparing the cell with its
  upstream neighbour, on the basis of three criteria:
  \begin{enumerate}
    \item the primary direction of the cell and its upstream
    neighbour is the same;
    \item the secondary direction of the cell and its upstream
    neighbour is the same;
    \item the gradient between the source of the flow passing onto
    the cell and the downstream neighbour defined by the secondary
    direction is higher than the gradient between the same source
    and the downstream neighbour defined by the primary direction.
  \end{enumerate}
  If all three criteria are satisfied, the secondary
  direction is chosen; otherwise, the primary one is
  assigned.
  \item[Termination:] when all \ac{DEM} cells have been assigned a
  flow direction, the algorithm stops.
\end{description}

\subsubsection{Short comparison between \ac{D8} and \ac{GD8}}
The improvement provided by the \ac{GD8} algorithm over \ac{D8} is
significant.
For non-dispersive methods, \ie those which assign only one
flow direction to each cell, like \ac{D8} and \ac{GD8}, there is an
uncertainty when the direction is chosen from the central
cell to one of its neighbouring cells.
The reason for this uncertainty is that, among all the directions
that the flow might assume in a real condition, the discretization
allows only the eight directions related to the eight neighbouring
cells.
This uncertainty is therefore equal to
$\frac{2\pi}{8}=\frac{\pi}{4}$, since each cell of the \ac{DEM}
has $8$ neighbours.
Given this, the difference between the two algorithms
is the following:
\begin{itemize}
  \item the \ac{D8} algorithm does not include any correction option,
  since it considers only the local gradients, \ie the gradients
  between a cell and its neighbours, and it simply chooses the
  \ac{LSD} as the flow direction.
  Therefore, if a direction different from the real one is
  taken, the error accumulates along the flow and the
  simulated directions might be very distant from the real ones;
  \item the \ac{GD8} algorithm offers two direction options, \ie
  the primary and the secondary directions, and it considers not
  only local slopes, but also slopes given by
  \blockcquote{paik_global:2008}{cells up to the highest-order
  neighbors}, being a global search. Therefore, it is able to
  reduce the cumulative error by evaluating global slope
  conditions and, on that basis, choosing, between its two options,
  the direction that follows the maximum global slope.
\end{itemize}

\subsection{\ac{TEE}, \ac{EEL}, \ac{EE}, \ac{EEE} in the model}
As shown in \myFigNoSpace{fig:modelFlowchart}\graffito{
The formulation of the criteria must be rewritten according to 
the \ac{DEM} cell discretization}, the last operation 
performed by the model, after having extracted the river network, 
is evaluating the values of the chosen criteria. 
In \mySubsec{subs:optimalityPrinciples} it was explained that 
the four objectives to test
with our framework are \acf{TEE}, \acf{EEL}, \acf{EE} and
\acf{EEE}.
The definitions previously provided for the four objectives
referred to a river network composed of links and nodes.
In our model, since the basis is a \ac{DEM} discretized with a
certain number of cells, the formulation of the objectives needs to
be translated according to this discretization, so that the
elementary components of the river networks are no longer links and
nodes, but cells.
Under this assumption, the new formulation of the four
objectives for a \ac{DEM} with a number of rows of cells equal
to $N_\text{rows}$ and a number of columns equal to
$N_\text{columns}$ is:
\begin{description}
\item[Minimum total energy expenditure:]
%\begin{empheq}[box=\mygraybox]{align*}
\begin{equation}
\highlight{
\text{TEE} = \min\left( \sum_{i=1}^{N_\text{rows}} \left(
\sum_{j=1}^{N_\text{columns}} {Q_{i,j}^{0.5}\,L_{i,j}} \right)
\right) }
\label{eq:TEEcell}
\end{equation}
%\end{empheq}

\item[Minimum energy expenditure in any link:]
\begin{equation}
\highlight{ \text{EEL} = \min \left( \var \left( \mathbf{Q^{0.5}L}
\right) \right)}
\label{eq:EELcell} 
\end{equation}  
where $\mathbf{Q^{0.5}L}$ is the matrix containing the product
${Q_{i,j}^{0.5}L_{i,j}}$ for every ($i$-th, $j$-th) cell, with:
\begin{itemize}
  \item  $i\in[1;N_\text{rows}]$ 
  \item  $j\in[1;N_\text{columns}]$
\end{itemize}

\item[Minimum energy expenditure per unit area:]
\begin{equation}
\highlight{ \text{EE} = \min \left( \sum_{i=1}^{N_\text{rows}}
\left( \sum_{j=1}^{N_\text{columns}} {Q_{i,j}^{0.5}S_{i,j}}
\right) \right) }
\label{eq:EEcell} 
\end{equation}  

\item[Equal energy expenditure per unit area:]
\begin{equation}
\highlight{ \text{EEE} = \min \left( \var \left( \mathbf{Q^{0.5}S}
\right) \right)}
\label{eq:EEEcell} 
\end{equation} 
where $\mathbf{Q^{0.5}S}$ is the matrix containing the product
${Q_{i,j}^{0.5}S_{i,j}}$ for every ($i$-th, $j$-th) cell, with:
\begin{itemize}
  \item  $i\in[1;N_\text{rows}]$ 
  \item  $j\in[1;N_\text{columns}]$
\end{itemize}
\end{description}
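On a cell basis, all four objectives reduce to sums and variances over matrices. The following sketch (in Python with NumPy, purely illustrative: the thesis model is not written this way, and the array names are our assumptions) makes the correspondence with the equations above explicit:

```python
import numpy as np

def objectives(Q, L, S):
    """Evaluate the four energy-based objectives on a cell basis.

    Q, L, S are N_rows x N_columns arrays holding, for each DEM cell,
    the discharge, flow-path length and slope (illustrative names).
    All four objectives are to be minimized.
    """
    ql = np.sqrt(Q) * L          # Q^0.5 * L for every cell
    qs = np.sqrt(Q) * S          # Q^0.5 * S for every cell
    tee = ql.sum()               # total energy expenditure (TEE)
    eel = ql.var()               # variance of per-link expenditure (EEL)
    ee = qs.sum()                # energy expenditure per unit area (EE)
    eee = qs.var()               # variance of per-area expenditure (EEE)
    return tee, eel, ee, eee
```

A uniform landscape gives zero variance for the equality-type objectives, which is the intuition behind minimizing the variance terms.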

\subsubsection{Evaluating the objectives on a cell basis: is it
reasonable?}
A question that arises at this point, given the new definition of
the objectives based on a \ac{DEM} cell discretization, is:
is this kind of discretization reasonable with respect to the
evaluation of the four objectives?

Three reasons can be advanced in order to support the
effectiveness of this discretization choice.
The first one is that the formulation of the objectives changes
according to the discretization, but their meaning does not: when
\citeauthor{rodriguez:1992} enunciated the \ac{TEE} in
\cite{rodriguez:1992}, in the way it is written in
\mySubsecNoSpace{subs:optimalityPrinciples}, it was already evaluated as
an aggregated objective of discretized elements, having as
elementary components the links of the network, which may vary in
length.
Evaluating the objectives on \ac{DEM} cells only changes the
discretization technique: a regular grid is imposed on the analyzed
network, but its features are not altered (the total length,
discharge and slopes are the same).
Therefore, the aggregation for the evaluation of the objectives
only changes the scale of the disaggregated elements, not their
overall aggregation.
As an example, if a stream link has length $L=10 \,m$ and constant
discharge $Q=25\,m^3/s$, its \ac{TEE} will be equal to $50
\sqrt{(m^3/s)}m$; hypothesizing that the link is split into two
cells once a \ac{DEM} grid is superimposed on it, each cell will
have $L=5\,m$ and $Q=25\, m^3/s$, and the total \ac{TEE} is
equivalent: $\text{TEE} = 5 \times 5 + 5 \times 5 = 50
\sqrt{(m^3/s)}m$.
Of course, a proper \ac{DEM} scale should be chosen in order to
maintain exactly the same network features and obtain exactly the
same values for the objectives.
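The invariance claimed in the example above can be checked with the same numbers (a toy verification, not part of the model):

```python
import math

# One link: L = 10 m, Q = 25 m^3/s  ->  TEE = Q^0.5 * L = 5 * 10 = 50
tee_link = math.sqrt(25.0) * 10.0

# The same link split into two 5 m cells, each carrying Q = 25 m^3/s
tee_cells = sum(math.sqrt(25.0) * 5.0 for _ in range(2))

assert tee_link == tee_cells == 50.0
```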

Nevertheless, as \citeauthor{rigon:1993} prove through a
multiscale analysis of river networks, the fractal nature of
river networks and subnetworks grants that \blockcquote[see][p.
1644]{rigon:1993}{the networks of different sizes exhibit
consistently the same statistic}.
Of course, changing the \ac{DEM} scale causes a variation in the
absolute values of the objectives, but the river network
properties and scaling laws are not affected, therefore the
objectives maintain their meaning and are suitable descriptors of
the considered networks.

Finally, other important scientific studies evaluating optimality
principles on a \ac{DEM} cell basis have been previously carried
out: among them we cite \cite{rigon:1993} and \cite{paik:2011}. 

\subsubsection{Drained area threshold for objectives evaluation}
Another topic related to \ac{DEM} scaling and river networks
extraction is the one of drained area support, or drained area
threshold.
As asserted by \citeauthor{tarboton:1992}, there is a particular
drained area value which should be set as the minimum threshold
for river network extraction from a \ac{DEM}, defined as
\blockcquote[see][p. 59]{tarboton:1992}{a point of transition
between the stable smoothing effects of diffusive processes at
small scale and the unstable effects of concentrative processes,
such as overland or open channel flow}.
In particular, it is shown in the same article that this value can
be found as the maximum of the function relating slope values to
drained area values of a river network
\cite{tarboton:1991,tarboton:1992}.

This concept of drained area threshold was not considered for the
objectives evaluation in our model.
The reason is that, before introducing the spatial interpolator,
the overall system (composed by model and optimizer) was
structured requiring a control mask with a value for the
variation in elevation for each \ac{DEM} cell.
As a consequence, all the controls affected the objectives
values; therefore, excluding the ones under the threshold from the
objectives evaluation would have led to a lack of information in
the controls--objectives correlation.

On the contrary, the concept of threshold is considered during the
evaluation of river networks according to naturality indexes, as
the reader will notice while reading
\mySec{sec:analysisTools}.
In particular, it was set to $4$ cells, \ie $4\,km^2$, both for
naturality indexes evaluation and river networks visualization,
coherently with \citeauthor{paik:2011}'s \ac{GLE} model in
\cite{paik:2011}, \cite{paik_global:2008} and \cite{paik:2012}.

\section{Optimization algorithms}
\label{sec:optimizationAlgorithm}
In this section\graffito{\acs{eNSGAII}, \acs{GDE3} and \acs{OMOPSO}
 are the optimization algorithms used in our experiments, 
 apart from random search.}, the elements necessary to understand the phases described 
in box \textit{(c)} of \myFig{fig:flowchart} are explained.
As outlined in \mySecNoSpace{sec:optimization}, to search for the optimal
landscape, \ie the functions $f(\cdot)$ that minimize
\myEqNoSPace{eq:riverDefinitionMultiObj}, an evolutionary algorithm
approach is used. The following subsections present the algorithms
used, their differences and history, and the reasons for their use.

These algorithms are common and already implemented. We used a
library that provides their implementations together with
management functions: the \citetitle{moeaframework:2013}
\cite{moeaframework:2013} created by
\citeauthor{moeaframework:2013}. As the website states,
\blockcquote[from][http://www.moeaframework.org/]{moeaframework:2013}{
The MOEA Framework is a free and open source Java library for
developing and experimenting with multiobjective evolutionary
algorithms (\acp*{MOEA}) and other general-purpose multiobjective
optimization algorithms. The \ac*{MOEA} Framework supports genetic
algorithms, differential evolution, particle swarm optimization,
genetic programming, grammatical evolution, and more. A number of
algorithms are provided out-of-the-box, including \ac*{NSGAII},
\ac*{eMOEA}, \ac*{GDE3} and \ac*{MOEAD}. In addition, the MOEA
Framework provides the tools necessary to rapidly design, develop,
execute and statistically test optimization algorithms.
}
The library is actively developed, with five releases in the
last year according to the website \cite{moeaframework:2013}; it is
easy to understand thanks to its object-oriented design and simple
to extend with new problems. Documentation for its use is provided
on the website and on the blog of the research group that
mainly uses it \cite{waterprogramming:2013}.

The algorithms used are now presented and
explained. Their choice is based mostly on the
considerations in \cite{reed_evolutionary:2012} and the
availability of free versions. Additional information about
\ac{eNSGAII} and \ac{GDE3} can be found in
\myAppendixNoSpace{appChap:moeas}.

\subsection{eNSGAII}
\ac{eNSGAII} is a \ac{MOEA} built on the \ac{NSGAII}, with
the additional capabilities of $\varepsilon$-dominance archiving,
adaptive population sizing and automatic termination to minimize
the need for extensive parameter calibration
\cite{kollat_comparing:2006}. The primary goal of \ac{eNSGAII} is
to provide a highly reliable and efficient MOEA which minimizes
the need for parameterization \cite{kollat_comparing:2006}.

The parameters required to set up an optimization run of 
\ac{eNSGAII} are the initial population size, the maximum
\ac{NFE}, the injection rate into the archive and the parameters
related to simulated binary crossover and mutation operator.
Suggested possible values for these parameters are shown in
\myTabNoSpace{tab:MOEAandParameters}.

\begin{table}[tb]
\footnotesize
\myfloatalign
\begin{tabularx}{0.9\textwidth}{llrcl}
\toprule
\multicolumn{1}{c}{\normalsize{Algorithm}} &
\multicolumn{1}{c}{\normalsize{Parameter}} &
\multicolumn{3}{c}{\normalsize{Suggested Values}} \\
\midrule
\textbf{Any}
& \acl{NFE} 		& $10\,000 $ & $ \div $ & $ 200\,000$ \\
& Population Size 	&  $10 $ & $ \div 	$ & $ 1000$ \\
\midrule
\textbf{\ac{eNSGAII}} 
& Injection rate 				& $0.1 $ & $\div $ & $ 1.0$ \\
& \ac{SBX} rate 				& $0.0 $ & $\div $ & $ 1.0$ \\
& \ac{SBX} distribution index	& $0.0 $ & $\div $ & $ 500.0$ \\
& \ac{PM} rate 					& $0.0 $ & $\div $ & $ 1.0$ \\
& \ac{PM} distribution index 	& $0.0 $ & $\div $ & $ 500.0$ \\
\midrule
\textbf{\ac{GDE3}}
& \ac{DE} step size & $0.0 $ & $\div $ & $ 1.0$ \\
& Crossover rate 	& $0.0$ & $ \div $ & $ 1.0$ \\
\midrule
\textbf{\ac{OMOPSO}}
& Archive size 		 & $10$ & $ \div $  & $ 1000$\\
& Perturbation index & $0.0 $ & $\div $ & $ 1.0$ \\
\bottomrule
\end{tabularx}
\caption[Parameters needed for each \ac{MOEA} used]{Parameters
needed for each \ac{MOEA} used: if the algorithm requires the
parameter, a range is given as in \cite{reed_evolutionary:2012}.}
\label{tab:MOEAandParameters}
\end{table}

\subsection{GDE3}
As the name suggests, \acl{GDE3} \cite{kukkonen_fast:2006} is
the third improvement of the Generalized version of \acl{DE}
\ac{EA} \cite{storn_differential:2005}. It is a multiobjective
variant of the \ac{DE} algorithm. \ac{GDE3} was one of the
top-rated algorithms in a competition for \acp{MOEA} \cite{zhang_final:2009}.

Among \ac{GDE3}'s features is its
mutation operator, which uses the scaled \enquote{difference}
between two population members' decision variable vectors to
generate new candidate solutions. This operator is
rotationally invariant and does not assume explicit search
directions when it creates new solutions. It also means that
\ac{GDE3} does not require decisions to be separable and
independent, \ie they can have conditional
dependencies.\footnote{The \ac{SBX} operator used by \ac{eNSGAII}
assumes problems have independent decisions that can be optimized
using only vertical or horizontal translations of the decision
variables.}

Another interesting feature of \ac{GDE3} is its constraint
handling method: it reduces the number of needed function
evaluations, being more efficient in finding solutions for
constrained problems.

The parameters required to set up an optimization run of 
\ac{GDE3} are: initial population size, maximum \ac{NFE},
crossover rate and step size of \ac{DE} operator. With
only four parameters, \ac{GDE3} appears to be very suitable for
applications. As for \ac{eNSGAII}, suggested possible values for
these parameters are shown in \myTabNoSpace{tab:MOEAandParameters}.

\subsection{OMOPSO}
\acl{OMOPSO} (\citeauthor{sierra_OMOPSO:2005}
\cite{sierra_OMOPSO:2005}) is one of the most successful
multiobjective \acl{PSO} algorithms to date. It is notable for
being the first multiobjective \ac{PSO} algorithm to include
$\varepsilon$-dominance and a crowding-based selection mechanism
to identify the leaders to be removed when there are too many of
them. It is also highly competitive, as the authors found after
performing a comparative study with respect to three other
\ac{PSO}-based approaches and two \acp{MOEA} (the \ac{NSGAII} and
\ac{SPEA2}).

The parameters required to set up an optimization run of 
\ac{OMOPSO} are: initial population size, maximum \ac{NFE},
archive size and perturbation index. As for \ac{eNSGAII} and 
\ac{GDE3}, suggested possible values for these parameters are shown in
\myTab{tab:MOEAandParameters}.

\subsection{Random Search}
Among the algorithms included into the MOEA Framework, one is
named Random Search. At each search step, it generates a new
population with random sampled values of the input, evaluates it
and adds the newly Pareto dominant solutions to an archive. Of
course, this is neither an \ac{EA} nor an efficient way to solve a
\ac{MOP}.

However, it is included in the following analysis as a benchmark of
the ability of a proper \ac{MOEA} to perform better than a random
sampling of input values. In fact, this kind of search is not
based on the connection between input values and optimality principles.
Therefore, the comparison between the two procedures is an indirect
assessment of the strength of that connection and gives a hint for
understanding whether landscape evolution under river dynamics
follows an optimality principle\footnote{see
\mySec{sec:testingOptimality}.}.

\section{Multiple approximations of the Pareto front}
\label{sec:mergingParetoFront}
Again, this section is about the phases described 
in box \textit{(c)} of \myFigNoSpace{fig:flowchart}.
\MySubsec{subs:evolutionaryAlgo} introduced the choice of using
\acp{EA}, but also underlined two important drawbacks of those
algorithms.
The first is that, when the solution is unknown, it is impossible
to know how far the solution found by the \ac{MOEA} is from the
real Pareto front.
The second drawback is that parameterization can greatly impact
the performance of an \ac{MOEA}
\cite{reed_evolutionary:2012,hadka_borg:2012,hadka_diagnostic_2012}.
There is also a third point worth mentioning: the
algorithms used here rely on random number generation to
initialize the first population or to go on with the optimization
process \cite{reed_evolutionary:2012,hadka_diagnostic_2012}.
The first drawback is deeply related to the other two: poor
parameterization or an unlucky random seed can prevent the \ac{MOEA}
from finding a good approximation of the Pareto front.

The proposed problem-solving methodology tries to take care of
these problems by increasing the number of optimization runs for
each experiment. In other words, given a model setup and a choice
of the optimality principles to test, the optimization process is
repeated multiple times to overcome the dependency on the random
number generator seed. Given the computational burden, it is
repeated from five to ten times.\footnote{We will call experiment
a set of optimization runs with the same model setup and the same
optimality criteria to be followed.}

As highlighted, the choice of the algorithm and the fine tuning of
its parameters can deeply affect the quality of the Pareto front
produced. Lacking previous experience with optimizations of this
kind of problem, more than one algorithm has been selected.
\ac{GDE3}, \ac{eNSGAII} and \ac{OMOPSO} are recognized as top
performing algorithms in the water resources management field,
which is the closest field of application to the problem faced
here. The parameterization would have required a random sampling
of parameter values and multiple optimizations with each of them.
Computational burden and time availability prevented us from
performing this operation: we therefore defer this analysis to
further research and present the results with hand-tuned
parameters, aware of the limitations that come with this.

\subsection{Recent trend in \acp{MOEA}}
Using multiple optimization algorithms means that we applied
different selection, crossover and mutation operators to find the
Pareto front. In particular, the algorithms take advantage of
\acl{SBX} and \acl{DE} crossover operators, of \acl{PM} and
perturbation mutation operators and of different selection
strategies, like archive or re-injection.

This is the recent trend in the development of \acp{MOEA}. Given the
variety of fitness landscapes and the complexity of search
population dynamics, \citeauthor{vrugt_improved:2007}
\cite{vrugt_improved:2007} proposed to adapt the evolutionary
operators used during multiobjective search, based on their success
in guiding search. The multiobjective \ac{AMALGAM} \ac{MOEA}
\cite{vrugt_improved:2007} exploits multialgorithm search that
combines the \ac{NSGAII}, \ac{PSO}, \ac{DE}, and \ac{AM}. The new
Borg \ac{MOEA} \cite{hadka_borg:2012} features multiple
recombination operators to enhance the search in a wide assortment
of problem domains.

Another feature used by \ac{eNSGAII}, Borg and other \acp{MOEA} is
the $\varepsilon$-box dominance archive for maintaining
convergence and diversity throughout the search. It is also
used to compose the best approximation from different
\ac{MOEA} runs. It will be described in the next section.

\subsection{Epsilon box Pareto dominance}
\label{subs:epsilonDominance}
The $\varepsilon$-box dominance is a concept that helps \acp{MOEA}
to retain the dominant solutions found during the optimization
process. The same concept is used to aggregate the multiple Pareto
fronts which result from different optimization runs.

The problem that $\varepsilon$-box solves is called deterioration.
It occurs whenever the solution set discovered by an \ac{MOEA} at
time $i$ contains one or more solutions dominated by a solution
discovered at some earlier time $j < i$ and discarded
during the optimization. \citeauthor{laumanns_epsilon:2002}
\cite{laumanns_epsilon:2002} (\citeyear{laumanns_epsilon:2002})
effectively eliminate deterioration with the
$\varepsilon$-dominance archive and guarantee simultaneous
convergence and diversity in \acp{MOEA}.
\citeauthor{hadka_borg:2012} gives this definition of
$\varepsilon$-dominance archive:
\begin{definition}
For a given $\varepsilon > 0$, a vector $\mathbf{u} = (u_1, u_2,
\ldots, u_M )$ $\varepsilon$-dominates another vector $\mathbf{v}
= (v_1, v_2, \ldots, v_M )$ if and only if $\forall i \in \{1, 2,
\ldots, M \},\ u_i \leq v_i + \varepsilon$ and $\exists j \in \{1,
2, \ldots, M \},\ u_j < v_j + \varepsilon$.
\end{definition}

The $\varepsilon$-dominance also provides a minimum resolution
that bounds the archive size and allows different
$\varepsilon$ values to be specified for each objective. Conceptually, the
$\varepsilon$-box dominance archive discretizes the objective space
into subspaces with side-length $\varepsilon$, called
$\varepsilon$-boxes. Therefore, the user of the algorithm is able
to define the desired resolution of the objective values.
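The $\varepsilon$-dominance relation of the definition above, and a simplified sequential archive that can merge several front approximations, can be sketched as follows (assuming minimization of all objectives; this illustrative code omits the $\varepsilon$-box resolution bookkeeping of the full archive):

```python
def eps_dominates(u, v, eps):
    """True if u epsilon-dominates v (minimization), following the
    definition: all u_i <= v_i + eps and some u_j < v_j + eps."""
    return (all(ui <= vi + eps for ui, vi in zip(u, v))
            and any(ui < vi + eps for ui, vi in zip(u, v)))

def merge_fronts(fronts, eps):
    """Merge several Pareto-front approximations into one archive of
    epsilon-nondominated points, inserting points sequentially."""
    archive = []
    for p in (pt for front in fronts for pt in front):
        # skip p if an archived point already epsilon-dominates it
        if any(eps_dominates(q, p, eps) for q in archive):
            continue
        # drop archived points that p epsilon-dominates, then keep p
        archive = [q for q in archive if not eps_dominates(p, q, eps)]
        archive.append(p)
    return archive
```

The sequential insertion avoids deterioration: once a region of the objective space is covered by an archived point, later dominated discoveries cannot displace it.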

This very important concept is used to compose the multiple Pareto fronts
resulting from the different executions of the various algorithms.
The resulting Pareto front will therefore contain the
$\varepsilon$-dominant solutions across all the solution points in
all the Pareto fronts produced for a given experiment.

% \subsubsection{Algorithm performance metrics}
% Once the best available Pareto front has been built it is possible
% to evaluate performance metrics to assess the ability of an
% algorithm with its parameterization to solve the problem.
% 
% description of metrics and numbers

\section{Output analysis tools}
\label{sec:analysisTools}
From this \graffito{The last operation conducted in our framework 
consists in evaluating the results according to naturality indexes.}
point till the end of the chapter, the tools implemented 
for performing the phases included in box \textit{(d)} of 
\myFig{fig:flowchart} are described.
Once the Pareto front is obtained according to the techniques
explained in \mySecNoSpace{sec:mergingParetoFront}, each of its
points can be visualized in the objectives space, \ie the
coordinates of each point are objectives values.
An example of Pareto front is provided in \myFigNoSpace{fig:pfMogle}.
Moreover, each point of the front represents a landscape,
characterized by its elevation data and by the river network which
develops on it. The objectives values related to each point are
the values of \ac{TEE}, \ac{EE}, \ac{EEL}, and \ac{EEE} for the
landscape hidden behind the point itself.
Given that and the workflow described in
\myChapNoSpace{chap:testingOptimality} and \myFigNoSpace{fig:flowchart}, 
each point must be analyzed according to the naturality indexes 
defined in \mySecNoSpace{sec:natIndexes}:
\begin{itemize}
  \item Horton's ratios $R_B$, $R_L$, $R_A$ and $R_S$;
  \item Hack's law exponent;
  \item probability distribution of contributing area exponent;
\end{itemize}
In order to do that, some Matlab\copyright{} functions were
developed, as described in the following subsections.

\subsection{Matlab\copyright{} functions for evaluating Horton's
indexes}
Horton's indexes are evaluated, for each model output network, \ie
for each Pareto front point, using the base \ac{DEM}, the values
of the drained area and the flow directions for each of its cells.
The procedure implemented in Matlab\copyright{} is the following:
\begin{enumerate}
  \item springs are identified. More precisely, a minimum
  threshold equal to $4$ cells, \ie $4\ \text{km}^2$, is set, in
  accordance with \citeauthor{paik:2011}'s work \cite{paik:2011},
  and springs are thus identified as cells draining such an area;
  \item Strahler order is set to $1$ for the identified spring
  cells;
  \item flow directions are followed downstream and a Strahler
  order value is set for each cell of the \ac{DEM}, in the same
  way as shown in the left side of
  \myFigNoSpace{fig:hortonStrahler}.
  Of course, cells belonging to the same river branch will assume
  the same order.
  When two branches of orders $i$ and $j$ meet, the resulting
  branch order $k$ is evaluated as in \myEqNoSPace{eq:orderRule}:
  \begin{equation}
 	k = \max \left( i,j, \text{int} \left( 1 + \frac{i+j}{2} \right)
 	\right)
  \label{eq:orderRule}
  \end{equation}
  \item Horton's ratios \ie $R_B$, $R_L$, $R_A$ and $R_S$ are
  evaluated.
  In general, in order to simplify the analysis and at the same
  time have a significant number of samples for analyzing the
  results, the mentioned ratios are evaluated only for the three
  networks of the \ac{DEM} having the biggest drained areas.
    
  Moreover, $R_B$, $R_L$, $R_A$ and $R_S$ are evaluated as
  $10^\varepsilon$, where $\varepsilon$ is the slope of the
  interpolating line of the semilogarithmic plots of, respectively,
  number of streams for each order, average channel length,
  average channel drained area and average channel slope against
  stream order, as shown in the example graphs in
  \myFigNoSpace{fig:hortonIndexesLog}. As for the interpolation, in the
  case of the $R_B$ plot, it is imposed that the interpolating line
  passes through the point $(\omega, 0)$, $\omega$ being the maximum
  order of the basin.
  The reason is that in a network there must be only one branch
  with the maximum Strahler order, therefore $N(\omega)=1$ and
  $\log(N(\omega))=0$.
\end{enumerate}
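The junction rule of \myEqNoSPace{eq:orderRule} and the estimation of a Horton ratio as $10^\varepsilon$ from the semilog slope can be sketched as follows (illustrative helpers, not the Matlab\copyright{} implementation):

```python
import numpy as np

def junction_order(i, j):
    """Order of the branch formed where branches of orders i and j
    meet: k = max(i, j, int(1 + (i + j)/2)). This reduces to the
    classic Strahler rule (i + 1 when i == j, max otherwise)."""
    return max(i, j, int(1 + (i + j) / 2))

def horton_ratio(orders, values):
    """Estimate a Horton ratio as 10**|slope| of the regression of
    log10(values) (e.g. stream counts per order) against order."""
    slope, _intercept = np.polyfit(orders, np.log10(values), 1)
    return 10.0 ** abs(slope)
```

For a perfectly Hortonian network with $R_B = 4$ and maximum order $3$, the stream counts per order are $16$, $4$ and $1$, and the fitted ratio recovers $4$ exactly.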

\begin{figure}
\myfloatalign
\includegraphics[width=\columnwidth]{Images/hortonIndexes.pdf}  
\caption[Horton indexes semilogarithmic plots]{Horton indexes
semilogarithmic plots. $\omega$ is the stream order. In the
upper-left plot, $N(\omega)$ is the number of streams for each
stream order. In the upper-right plot, $L(\omega)$ is the average
channel length for each stream order. In the lower-left plot,
$A(\omega)$ is the average drained area for each stream order.
In the lower-right plot, $S(\omega)$ is the average slope for each
order. The blue line in each plot represents the linear
interpolation, whose slope is the exponent $\varepsilon$ used for
evaluating the Horton ratios.}
\label{fig:hortonIndexesLog}
\end{figure}
 
\subsection{Matlab\copyright{} function for evaluating Hack's law
exponent}
Remembering that Hack's law states that $L\propto A^h$ as in
\myEqNoSPace{eq:HacksLaw}, the value of interest for evaluating
the naturality of a river network is $h$.
Moreover, since the law is valid for the longest stream from an
outlet to the divide, as asserted in
\cite{gray_interrelationships:1961}, it is evaluated including only
the main channel of the considered basins (in our case the three
biggest basins of each landscape).
It is therefore estimated, for the main channel of each basin,
using the following algorithm:
\begin{enumerate}
  \item the main channel is traversed from the most upstream cell
  to the outlet, and the cumulative length $L_\text{mainChannel}$
  is iteratively evaluated for each of its cells;
  \item the cumulative drained area of each cell belonging to the
  main channel $DA_\text{mainChannel}$ is retrieved from the
  drained area matrix obtained as output of the model;
  \item Hack's law exponent $h$ is evaluated as the slope of the
  interpolating line in the logarithmic plot showing the relation
  between $DA_\text{mainChannel}$ and $L_\text{mainChannel}$ as
  the example presented in \myFig{fig:LvsAd}.
\end{enumerate}
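The fitting step above amounts to a log-log regression; a minimal sketch (illustrative code, with hypothetical argument names matching the quantities in the list):

```python
import numpy as np

def hack_exponent(areas, lengths):
    """Hack's law exponent h in L ~ A**h, estimated as the slope of
    the log-log regression of main-channel length against drained
    area (both given as cumulative values along the main channel)."""
    h, _intercept = np.polyfit(np.log10(areas), np.log10(lengths), 1)
    return h
```

On synthetic data generated with a known exponent, the regression returns that exponent.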

\begin{figure}
\myfloatalign
\includegraphics[width=0.8\columnwidth]{Images/LvsAd.pdf}  
\caption[Hack's law logarithmic plot]{Hack's law logarithmic
plot. $DA_\text{mainChannel}$ represents the drained area of the
main channel of a network, at different distances
$L_\text{mainChannel}$ from its spring. The blue line represents
the linear interpolation, whose slope is Hack's law exponent $h$.}
\label{fig:LvsAd}
\end{figure}

\subsection{Matlab\copyright{} function for evaluating the
probability distribution of contributing area exponent}
The function for evaluating the exponent of the probability
distribution of contributing area, \ie $\beta$ in
\myEqNoSPace{eq:probabilityDrainedArea}, is implemented according
to the following algorithm, which is repeated for each considered
basin (in our case the three biggest ones for each landscape):
\begin{enumerate}
  \item a probability distribution of drained area values is
  computed, considering the values for the cells belonging to the
  current basin. It is important to recall that the values of
  drained area for each cell are produced as outputs of the model.
  Again, only cells draining more than $4$ cells (\ie $4\
  \text{km}^2$) are considered;
  \item the cumulative probability distribution of drained areas
  is computed, starting from the probability distribution
  generated at step 1;
  \item $\beta$ is evaluated as the slope of the interpolating
  line in the logarithmic plot showing the cumulative distribution
  of drained area, as the example presented in \myFig{fig:pDa}.
\end{enumerate}
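The three steps above can be sketched in a few lines (illustrative code, not the Matlab\copyright{} function; the threshold default mirrors the $4$-cell support used in the text):

```python
import numpy as np

def area_exponent(drained_areas, threshold=4.0):
    """Slope of the log-log exceedance distribution P(DA >= delta)
    of drained area values; cells below `threshold` are discarded.
    The returned slope is negative for a decaying power law."""
    da = np.sort(np.asarray([a for a in drained_areas if a >= threshold]))
    # empirical exceedance probability attached to each sorted value
    p_exceed = 1.0 - np.arange(da.size) / da.size
    slope, _intercept = np.polyfit(np.log10(da), np.log10(p_exceed), 1)
    return slope
```

For a sample constructed so that $P(DA \geq \delta) = \delta^{-1}$ exactly, the fitted slope is $-1$.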
\begin{figure}

\myfloatalign
\includegraphics[width=0.8\columnwidth]{Images/pDa.pdf}  
\caption[Probability distribution of drained area]{Probability
distribution of drained area. $\delta$ is the drained area value
for which the probability $p(DA \geq \delta)$ is iteratively
evaluated for the considered basin. The blue line represents the linear
interpolation, whose slope is the probability distribution of
contributing area exponent.}
\label{fig:pDa}
\end{figure}

\subsection{Evaluating the number of indexes}
After analyzing each point of the Pareto front using the functions
just described, the number of indexes obtained is the following:

\paragraph{Number of Horton indexes $N_{\text{Horton}}$:}
\begin{equation}
  N_{\text{Horton}} = N_{\text{points}} \times N_{\text{basins}}
  \times N_{\text{ratios}}
\label{eq:numberOfHortonIndexes}
\end{equation}
where:
\begin{itemize}
  \item $N_{\text{ratios}} = 4$ since it considers $R_B$, $R_L$,
  $R_A$ and $R_S$;
  \item $N_{\text{points}}$ is the number of points in the Pareto
  front;
  \item $N_{\text{basins}}$ is the number of considered basins,
  for each point (in our analysis it is set to $3$).
\end{itemize}

\paragraph{Number of Hack's law exponents $N_{\text{Hack}}$:}
\begin{equation}
  N_{\text{Hack}} = N_{\text{points}} \times N_{\text{basins}}
\label{eq:numberOfHackIndexes}
\end{equation}

\paragraph{Number of exponents of the probability
distribution of contributing area $N_{\text{probability}}$:}
\begin{equation}
  N_{\text{probability}} = N_{\text{points}} \times
  N_{\text{basins}}
\label{eq:numberOfProbabilityIndexes}
\end{equation}

\paragraph{Total number of indexes ($N_{\text{indexes}}$):}
\begin{equation}
  N_{\text{indexes}} = N_{\text{Horton}} + N_{\text{Hack}} +
  N_{\text{probability}}
\label{eq:numberOfIndexesTotal}
\end{equation}

Assuming as an example a Pareto front composed of $1\,000$ points
and considering the biggest $3$ basins for each \ac{DEM}, we would
obtain a number of indexes to analyze equal to $N_\text{indexes} =
18\,000$.
For this reason, a clustering technique was adopted, in order to
simplify and speed up the result analysis phase. It is described in
the following subsection.

\subsection{Clustering technique}
As explained in the previous section \graffito{A clustering 
technique is needed, in order to face the extremely high number 
of results and analyze all of them.}, since the number of indexes
produced by analysing all the points of the Pareto front can be
very large, a clustering technique is adopted, in order to be able
to group points of the front and jointly analyze them.
\blockcquote{ketchen:1996}{Cluster analysis [\ldots] takes a
sample of elements and groups them such that the statistical
variance among elements grouped together is minimized while
between-group variance is maximized}.
The strategy is to group the points of the Pareto front, based on
the values of their objectives, select some representative clusters
and then evaluate the distribution of naturality indexes for each
of them.
These two phases, \ie clustering and building the distribution of
naturality indexes, are described in the next sections.

\subsubsection{Building the clusters: k-means clustering and
silhouette}
In order to group the Pareto front points in meaningful clusters,
the two following tools were used:
\begin{description}
  \item[$K$-means:] it is the algorithm which, once a parameter
  $k$ is set, organizes the points in $k$ clusters;
  \item[Silhouette:] it is the criterion considered for setting the
  number of clusters, \ie $k$ for the $k$-means algorithm.
\end{description}
These two elements are now detailed.

\paragraph{$k$-means clustering algorithm} It divides a set of
data into $k$ exclusive clusters. Since the clusters are exclusive,
\ie each point of the data set belongs to one and only one
cluster, the method is a so-called \enquote{hard}
clustering.
Each class is characterized by the number of points it includes
and the coordinates of a centroid. A centroid is defined as
\blockcquote{lleti:2004}{the point to which the sum of distances
from all objects in that cluster is minimised}.
In order to set the position of the $k$ centroids and define the
clusters in a way that \enquote{the distances from all objects in
the cluster is minimised}, the algorithm proceeds iteratively:
\blockcquote{kogan:2006}{The entire procedure is essentially a
gradient-based algorithm}. We used the Matlab\copyright{}
implementation of this algorithm.

\paragraph{Silhouette} It is a criterion which helps in finding the
number of clusters that best fits the given data set.
It works as described in \cite{lleti:2004}:
\begin{enumerate}
  \item a function $\omega(i)$ is defined for the $i$-th point as
  the average distance from the point itself to the other points
  of the same cluster;
  \item a function $b(i)$ is defined for the same $i$-th point as
  the minimum average distance from the point itself to the points
  of another cluster;
  \item the silhouette is computed for the $i$-th point as in
  \myEq{eq:silhouette}:
  \begin{equation}
 	s(i) = \frac{b(i) - \omega(i)}{\max[b(i), \omega(i)]}
  \label{eq:silhouette}
  \end{equation}
\end{enumerate}
It is therefore a measure of how well a point fits with the points
belonging to the same cluster, in comparison with how it fits in
other clusters.
It assumes values in the range $[-1;+1]$: it is close to $1$ when
the point fits well within its cluster, \ie it is distant from
the neighbouring ones, and close to $-1$ in the opposite case.

\paragraph{Choosing the number of clusters}
Since the \blockcquote{lleti:2004}{average silhouette width for
the entire data set} can be evaluated as the average of the single
points' silhouettes over the entire data set, the following
iterative procedure was implemented, in order to choose a good
number of clusters $k$ for the $k$-means algorithm:
\begin{enumerate}
  \item the $k$-means algorithm is run, starting with $k = 2$ \ie
  two clusters;
  \item the average silhouette width for the entire data set is
  computed, for the given number of clusters $k$;
  \item $k$ is incremented and the previous two steps are repeated.
\end{enumerate}
In order to avoid a too large number of clusters, a maximum number
of clusters $k$ equal to $\sqrt{\frac{n}{2}}$ was imposed, according
to \cite{ketchen:1996}, where $n$ is the number of data points
in the dataset.\footnote{As an example, a Pareto front of
$1\,000$ points will be organized in at most $22$ clusters.}
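The silhouette of \myEq{eq:silhouette} can be sketched by brute force as follows; the text's $\omega(i)$ appears here as `a`. This is an illustrative implementation, not the Matlab\copyright{} one, and it assumes every cluster contains at least two points:

```python
import numpy as np

def average_silhouette(points, labels):
    """Average silhouette width of a clustered data set, computed
    per point as s(i) = (b(i) - a(i)) / max(a(i), b(i))."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    # pairwise Euclidean distances between all points
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    scores = []
    for i in range(len(points)):
        same = labels == labels[i]
        same[i] = False  # exclude the point itself from its cluster
        a = dist[i, same].mean()  # omega(i): within-cluster distance
        b = min(dist[i, labels == c].mean()  # b(i): nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

In the iterative procedure above, one would evaluate this function for each tested $k$ and retain the $k$ (capped at $\sqrt{n/2}$) giving the largest average width.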

Moreover, since the Pareto fronts we obtained present a situation
of conflict between at least two objectives, we usually required a
minimum number of clusters equal to three.
In fact, if the Pareto front is projected on two dimensions (the
ones of the conflicting objectives), a number of clusters equal to
three allows the identification of two extreme areas (where only
one objective is minimized, while the other is not, and vice-versa)
and one compromise area between the previous two.
Applying the procedure just described, a clustered Pareto front is
obtained; an example is shown in
\myFigNoSpace{fig:paretoFrontIDWclusters}.

\subsubsection{Naturality indexes for each cluster}
Once the clustering of the Pareto front is performed, and given
that the naturality indexes were evaluated, for each point of the
front, on each of the biggest basins considered, the values of the
naturality indexes obtained from the synthetic landscapes
(\ie the outputs of the model) must be compared with the typical
values characterizing natural basins.
To do so:
\begin{enumerate}
  \item for each naturality index and for each cluster, the
  statistical distribution of the index values obtained
  for the basins of the points in the cluster is computed;
  \item the area of the distribution lying within the natural
  range of the index is integrated;
  \item the distributions of the current index are compared
  among the different clusters.
\end{enumerate}
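For an empirical distribution, step 2 reduces to the fraction of
sampled index values that fall inside the natural bounds; a minimal
sketch, where the function name is an illustrative assumption:

```python
def probability_within_range(values, lo, hi):
    """Empirical probability that a naturality index lies inside
    its natural range [lo, hi]: the fraction of sampled values
    (one per analyzed basin in the cluster) within the bounds."""
    inside = sum(1 for v in values if lo <= v <= hi)
    return inside / len(values)
```

For instance, with the natural range $[3, 6]$ of $R_A$, a cluster whose
basins yield the values $2.5$, $3.5$, $4.0$, $6.5$ would score a
probability of $0.5$.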

An example of this kind of analysis is shown in
\myFigNoSpace{fig:HortonAreaClusters_IDW}, where the naturality
index $R_A$ is considered and 3 clusters are analyzed. The
bounds represent the typical range assumed by the index in natural
networks. A reminder of the natural values is given in
\myTabNoSpace{tab:natIndexes}.\footnote{Ranges and indexes are
defined in \mySecNoSpace{sec:natIndexes}.}

\begin{figure}
\myfloatalign
\includegraphics[width=0.75\columnwidth]{Images/HortonAreaRatio_clusters.pdf}
\bigskip

\footnotesize
\begin{tabularx}{\textwidth}{p{0.3\textwidth}ccc}
\toprule
& \tableheadline{c}{Min TEE} & \tableheadline{c}{Compromise} &
\tableheadline{c}{Min EE} \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Number of analyzed points} & $40$ & $118$ &
$40$\\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Sample mean and standard
deviation} & $3.258 \pm 0.811$ & $4.343 \pm 1.184$ & $4.037 \pm
1.157$ \\
\midrule
\tablefirstcol{p{0.3\textwidth}}{Probability within natural
range} & $0.4831$ & $0.7463$ & $0.7153$\\
\bottomrule
\end{tabularx}

\caption[Statistical distribution of Horton's area
ratio within clusters.]{Statistical distribution of Horton's area
ratio within clusters. The chosen clusters are the same as in
\myFigNoSpace{fig:paretoFrontIDWclusters}: they are the two
extremes of the Pareto front shown in \myFig{fig:paretoFrontIDW}
and a compromise between them. The three graphs show the
statistical distribution of the Horton's area ratio for each of
the three biggest basins of each Pareto front point within the
cluster, \ie of each \ac{DEM} member of the cluster. The $y$ axis
reports the ratio value, the $x$ axis the percentage of values. The
gray area corresponds to the probability within the natural
range of variation of the index.}
\label{fig:HortonAreaClusters_IDW}
\end{figure}

\begin{table}
\myfloatalign
\begin{tabularx}{0.83\textwidth}{p{0.35\textwidth}rcl}
\toprule
\tableheadline{c}{Naturality index} &
\tableheadlineMore{3}{c}{Natural range/value}
\\
\midrule
\centering{$R_B$} & $\ \ \,\qquad3$ & $\div$ & $5$\\
\midrule
\centering{$R_L$} & $\ \ \,\qquad1.5$ & $\div$ & $3.5$\\
\midrule
\centering{$R_A$} & $\ \ \,\qquad3$ & $\div$ & $6$\\
\midrule
\centering{$R_S$} & $\ \ \,\qquad1.5$ & $\div$ & $3.5$\\
\midrule
\centering{$h$ \footnotesize{exponent of Hack's law }} &
\multicolumn{3}{c}{$0.6$}\\
\midrule
\centering{$\beta$ \footnotesize{exponent of drained area
probability distribution}} & $\ \ \,\qquad0.43$ & $\div$ &
$0.45$\\
\bottomrule
\end{tabularx}
\caption[Typical natural ranges for naturality indexes]
{Typical natural ranges for naturality indexes.}
\label{tab:natIndexes}
\end{table}