\chapter{Testing Optimality Principles: a framework}
\label{chap:testingOptimality}

\begin{flushright}{\slshape    
Order and simplification are the first steps toward the mastery of
a subject — the actual enemy is the unknown} \\ \medskip
--- \citeauthor{mann:1924}, \citetitle{mann:1924},
\citeyear{mann:1924} 
\end{flushright} 

The idea guiding this work was presented in the previous chapter,
but it now requires a formalization effort in order to be
implemented and used.
The formalization separates the methodology adopted to verify the
hypotheses from the hypotheses to be tested themselves and, hopefully,
constitutes the second contribution of this thesis to the field.

As a first stage, the problem is formalized verbally, then
translated into mathematical terms. This constitutes the
basis for identifying the steps required to test the hypotheses, which
are explained sequentially. Secondly, the steps are abstracted
from the problem, becoming the framework this thesis is
devoted to.
The framework is then critically discussed, in order to highlight
new aspects and opportunities for improvement.

\section{Problem formalization}
\label{sec:formalization}
The main hypotheses \graffito{\ac{LAP} principle and its
formulations are tested through the framework built during this
thesis work.} we want to test with this work are: 
\begin{itemize}
  \item landscape evolution under river dynamics may be   	
  explained as if it follows an optimality principle;
  \item this unique principle and the scale at which it works
  might be discovered by testing many formulations of \ac{LAP}.
\end{itemize}

These hypotheses imply accepting as true that river
evolution follows an optimality principle, but that none of the
formulations of the optimality principle tested so far is capable
of properly reproducing river behaviour.
Therefore, the proposed solution is to test many principles and to
analyze the conflicts that arise among them, in order to highlight
some of the characteristics of the unifying formulation.

\subsection{Mathematical formalization}
It is now necessary to redefine what landscape evolution and
river dynamics are, in order to translate the above sentences
into a mathematically tractable form.

\subsubsection{Landscape entities}
\label{subs:landscapeFormalization}
A landscape may be represented as a function of two space
variables, \ie two coordinates:
\begin{equation}
z = f(x,y)
\label{eq:elevationFunct}
\end{equation}
where $z$ is the elevation at the point defined by $(x,y)$.
This function $f(\cdot)$ can have many shapes, most of them
nonlinear. To avoid any assumptions about it, we will identify this
function with a look-up table.\footnote{\ie a finite form of
function in which a certain number of tuples $(x,y,z)$ is known and
represented in a three-column numerical matrix.}

The water flow over this landscape may be represented by a
two-dimensional vector field $\mathbf{q}(x,y)$, the
discharge per unit of contour length, whose direction is
parallel to the gradient vector associated with
\myEqNoSPace{eq:elevationFunct}:
\begin{equation}
\mathbf{q} \propto - \nabla f = - \left(\frac{\partial
f}{\partial x} \hat{i} + \frac{\partial f}{\partial y} \hat{j}
\right).
\end{equation}
The mass conservation principle applied to the flow of water leads to
\begin{equation}
\nabla \cdot \mathbf{q} = p(x,y) - \frac{\partial d}{\partial t}
\end{equation}
where $p(x,y)$ is the rainfall excess rate and $\frac{\partial
d}{\partial t}$ represents the partial derivative of flow depth
over the landscape with respect to time. In this thesis, a steady
state solution is sought, therefore $\frac{\partial
d}{\partial t} = 0$ and $\nabla \cdot \mathbf{q} = p(x,y)$.
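As a minimal numerical illustration (not part of the thesis model), the direction of $\mathbf{q} \propto -\nabla f$ can be sketched on a small elevation look-up table via finite differences; the grid, noise amplitude and variable names below are hypothetical:

```python
import numpy as np

# Hypothetical 5x5 elevation look-up table f(x, y): a plane rising
# toward larger x and y, plus a little noise.
rng = np.random.default_rng(0)
f = np.add.outer(np.arange(5.0), np.arange(5.0)) + 0.1 * rng.random((5, 5))

# Finite-difference gradient of f; the flow direction is along -grad f.
dfdx, dfdy = np.gradient(f)   # partial derivatives along the two axes
qx, qy = -dfdx, -dfdy         # direction of q (magnitude fixed by continuity)

# On a surface rising toward larger x and y, water flows on average
# toward smaller x and y.
assert qx.mean() < 0 and qy.mean() < 0
```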

Similarly to the water flow, the flow of sediments may also be
modeled with a vector field $\mathbf{q_s}(x,y)$ parallel to
$\mathbf{q}$. Mass conservation applied to sediment transport leads
to
\begin{equation}
\nabla \cdot \mathbf{q_s} = U(x,y) - b_t
\label{eq:massConservation}
\end{equation}
where $U(x,y)$ models the rate of tectonic uplift and $b_t$ is the
rate at which bedrock height or soil depth changes in $(x,y)$.
$b_t$ may be either negative or positive and represents deposition
or erosion caused by river dynamics. These entities will be
discussed in more detail later, and their inclusion in the model will be
explained.

\subsubsection{Precipitation}
Any groundwater flux is excluded from the focus of
this work, so the only input of water is the precipitation
$p(x,y)$, particularly the effective precipitation:\footnote{In
hydrology, according to the \ac{USGS} definition, \enquote{Effective
precipitation (rainfall) [is] that part of the precipitation that
produces runoff} \cite{langbein:1960}, \ie the precipitation in
excess of infiltration capacity, evaporation, transpiration, and
other losses.}
\begin{equation}
p = p(x,y,t,\ldots)
\end{equation}
which also depends on other characteristics such as temperature,
permeability, etc.
Given the focus of this work, we simplify the above expression by
writing the average value as
\begin{equation}
p = \bar{p} = \mathbb{E}[p(\cdot)]
\label{eq:Hyp1PrecipitationCostant}
\end{equation}
and we will use a unique value of precipitation for the whole
modeled landscape.

\subsubsection{Optimal landscape}
According to the hypothesis that rivers follow optimality
principles in shaping the landscape, it is possible to write
\begin{equation}
f(\cdot) = \arg \min_{f(\cdot)\in F(\cdot)} g(x,y,z) =  \arg
\min_{f(\cdot)\in F(\cdot)} g(x,y,f(\cdot))
\label{eq:genericCost}
\end{equation}
where $g(\cdot)$ is the mathematical formulation of the optimality
principle.\footnote{or the cost function, using the jargon of
control theory.}

The same hypothesis ensures that
\begin{equation}
\exists!\ f(\cdot) \in F(\cdot)\ |\ f(\cdot) = \arg
\min_{f(\cdot)} g(x,y,f(\cdot))
\label{eq:riverDefinition}
\end{equation}

The identification of the above \myEq{eq:riverDefinition}, given a
guessed $g(\cdot)$ function, is the goal of this thesis.

Given the problem as it has been formalized in the first
paragraphs of this chapter, it is now possible to describe the
macro-elements which compose the framework for testing optimality
principles in landscape evolution under river dynamics.
They are the following three:
\begin{description}
  \item[The model:] it is the part of the framework responsible
  for the evaluation of $g(x,y,f(\cdot))$. It is described in
  \mySecNoSpace{sec:theModel};
  \item[The optimization:] it is the phase of the framework
  responsible for the identification of $f(\cdot)$. The
  methodology is characterized in \mySecNoSpace{sec:optimization};
  \item[The evaluation through naturality indexes:] it is the part
  of the framework responsible for the assessment of the outputs
  provided by the previous two. Its features are described in
  \mySecNoSpace{sec:natIndexes}.
\end{description}

\section{The model}
\label{sec:theModel}
From \graffito{The model extracts drainage networks from a
landscape model and evaluates the values of the optimality
principles to test.} this point further, we will refer to the
model meaning the implementation of the evaluation of
$g(x,y,f(\cdot))$ as defined in \myEqNoSPace{eq:genericCost}. 
All the $g(\cdot)$ functions we evaluated require some common
hydrological variables, like discharge or channel length. 
As a consequence, the evaluation is split into four steps:
\begin{aenumerate}
  \item extraction of the drainage networks from the elevation
  surface, \ie the identification of the drainage direction from
  each point $(x,y)$;\footnote{In geomorphology, a drainage system
  or network is the pattern formed by streams, rivers, and lakes
  in a particular drainage basin. Geomorphologists and
  hydrologists often view streams as part of drainage basins
  \cite{pidwirny_drainage:2006}.}
  \item evaluation of channel lengths and slopes in the drainage
  network;
  \item evaluation of the discharge flowing through the channels; 
  \item assessment of the value of the optimality principle.
\end{aenumerate}

The first two steps are addressed by \citeauthor{ocallaghan:1984}
\cite{ocallaghan:1984} in their classic \citetitle{ocallaghan:1984}.
They identify several steps of \ac{DEM} analysis, among which
interior pit removal, drainage direction assignment and drainage
accumulation are interesting with respect to this work.
\begin{description}
  \item[Depression filling] the interior pit removal phase is
  commonly called depression filling. It is a common operation
  when dealing with field data, which present many so-called
  false depressions caused by the measuring process. These would
  be translated into lakes or ponds without any relation
  to reality. In the context of this thesis, this operation
  ensures that every point has an outlet and avoids the need to
  model lakes. In comparison with \cite{ocallaghan:1984}, this
  phase is performed earlier because a different algorithm, from
  \cite{planchon_fast:2002}, will be used.

  This topic will be explored in depth in the next chapter.
  
  \item[Drainage direction assignment] sometimes called flow
  routing, requires the identification of the direction field.
  This step is the backbone of the extraction of the drainage
  network from a terrain model. The idea is that water flows
  toward the steepest-descent direction. Many methods to
  perform this operation exist: the best known and most widely used
  is called \ac{D8}, introduced by \cite{ocallaghan:1984}. It performs a
  local search among the neighbours of a point to find the
  direction of steepest descent. The number eight in its name
  comes from the discretization of the possible directions (see
  \mySubsecNoSpace{subsubs:drainageNetworkDiscretization}). Here,
  an improved method derived from \ac{D8}, the \ac{GD8}
  algorithm from \cite{paik_global:2008}, will be used.
  
  \item[Drainage accumulation] is the phase which assigns to each
  point of the drainage network the area draining into that point.
  Given the simplifications introduced in
  \myEqNoSPace{eq:Hyp1PrecipitationCostant}, the precipitation is
  equally distributed over the whole landscape. Therefore, this
  phase also outputs the discharge at any point of the drainage
  network.
\end{description}
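The drainage direction assignment and drainage accumulation phases can be sketched as follows. This is a simplified \ac{D8}-style illustration, not the \ac{GD8} implementation used in this work, on a hypothetical $3 \times 3$ elevation matrix with a central valley:

```python
import numpy as np

def d8_directions(z):
    """Assign to each cell its steepest-descent neighbour (D8-style sketch)."""
    rows, cols = z.shape
    flow = {}
    for r in range(rows):
        for c in range(cols):
            best, target = 0.0, None
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nr, nc = r + dr, c + dc
                    if (dr, dc) == (0, 0) or not (0 <= nr < rows and 0 <= nc < cols):
                        continue
                    dist = np.hypot(dr, dc)                # 1 or sqrt(2)
                    slope = (z[r, c] - z[nr, nc]) / dist
                    if slope > best:
                        best, target = slope, (nr, nc)
            flow[(r, c)] = target                          # None marks an outlet/pit
    return flow

def accumulate(z, flow):
    """Drainage accumulation: number of cells draining through each cell."""
    acc = np.ones_like(z, dtype=int)                       # each cell drains itself
    # Visit cells from highest to lowest so donors are counted before receivers.
    for r, c in sorted(np.ndindex(z.shape), key=lambda rc: -z[rc]):
        if flow[(r, c)] is not None:
            acc[flow[(r, c)]] += acc[r, c]
    return acc

z = np.array([[3.0, 2.0, 3.0],
              [3.0, 1.0, 3.0],
              [3.0, 0.0, 3.0]])
acc = accumulate(z, d8_directions(z))
# All nine cells ultimately drain through the bottom-centre outlet.
```

Under the uniform precipitation of \myEqNoSPace{eq:Hyp1PrecipitationCostant}, multiplying the accumulated cell count by the cell area and the rainfall excess rate would give the discharge at each point.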

The operations just described are commonly used in \ac{GIS} with
hydrological capabilities. A \ac{GIS} is a system designed to
capture, store, manipulate, analyze, manage, and present all types
of geographical data. 
Computer-based \ac{GIS} have become widespread in recent years.
Nevertheless, none of them has the capability to connect with a
surface optimizer, while they have many other features which would
be useless for this work, such as geolocalization. Therefore, a
completely new software package was developed to perform these operations,
giving us the possibility to control each part of the evaluation
process.
Its practical implementation is shown in
\myChap{chap:instanceOfFramework} and
\myAppendixNoSpace{appChap:theModel}, but some important concepts
for understanding it are introduced in the next paragraphs.

\subsection{Discretization}
\label{subs:discretization}
In mathematics, discretization concerns the process of
transferring continuous models and equations into discrete
counterparts. This process is usually carried out as a first step
toward making them suitable for numerical evaluation and
implementation on digital computers.

\subsubsection{Landscape discretization}
Since our experiment runs on computers, we need to discretize the
elevation field. Again, this problem has already been faced by a
number of scientists, engineers and private companies. A \ac{DEM}
is a digital model or three-dimensional representation of a
terrain surface. A \ac{DEM} can be represented as a raster or as a
vector-based \ac{TIN}.\footnote{A raster is a grid of squares,
each of which contains a value that is representative of the whole
square. Rasters are also known as heightmaps when representing
elevation.}

In this work, rasters are used owing to their easier
implementation and wider diffusion \cite{ocallaghan:1984}. The
landscape, \ie the function $f(\cdot)$ as defined in
\mySubsecNoSpace{subs:landscapeFormalization}, is therefore
represented by a bi-dimensional matrix. Each cell of the matrix
contains an elevation value, which is assigned to an elementary
surface $dx \times dy$. The \ac{DEM} cells are assumed square,
\ie $dx = dy$, and the length $dx$ defines the scale of the \ac{DEM}
itself, \ie its resolution. The cell is sometimes named pixel
\cite{paik:2012}.

This definition means that the entities involved in
\myEq{eq:riverDefinition} are now defined as
\begin{align}
x & \in X \subset \mathbb{N}_0 & y & \in Y \subset \mathbb{N}_0 & z
& \in Z \subset \mathbb{N}_0 &
\label{eq:xyzDiscretization}
\end{align} 
$\mathbb{N}_0$ is used instead of the full $\mathbb{Z}$ to
simplify the implementation. $x$, $y$ and $z$ can be translated if
needed.
The surface $z(\cdot)$ becomes a matrix containing the $z$-value
at each $x$ row and $y$ column.

\subsubsection{Drainage Networks discretization}
\label{subsubs:drainageNetworkDiscretization}
The drainage network is also discretized, according to the
discretization of the landscape. Since the landscape is composed
of cells with a certain area, each of them generates surface
runoff when rain falls on it.

In the literature and in common \ac{GIS} applications, the flow
routing is often deterministic: all the runoff from a cell flows
toward a single other cell. Given the landscape representation, it
is clear that the possible flow directions are towards the eight
neighbouring cells. This also means that there are only two
possible lengths for the drainage line connecting the centers of
two cells: the cell length, $dx$, and the diagonal length of the
cell $\sqrt{2}dx$.\footnote{As it is common in hydrologic
literature, the length is taken as the planar projection of the
path of the river, which is reasonable for very low slopes like
the ones treated here.}

Another quantity which characterizes a drainage network is the
slope of the main channel. Given a discretized length and a
discretized elevation, the possible values of slope are a subset
of the rational numbers.

Finally, the drained area at a location $(x,y)$ of the \ac{DEM}
is the upstream area whose rainfall drains into $(x,y)$ itself.
In this work, the drained area of a cell equals the
number of cells which drain into the cell itself and, therefore,
it is discretized as well.

\subsection{Model inputs}
\label{subs:modelInputs}
As seen, the function $g(\cdot)$, \ie the optimality principle, may be
evaluated by the model. To do that, the model needs to know
the landscape, \ie the function $f(\cdot)$. Given the
discretized representation of the \ac{DEM}, the model actually
needs one value of $f(\cdot)$ for each \ac{DEM} cell.

Since a \ac{DEM} can cover many square kilometers and is therefore
composed of thousands of cells, the number of inputs is very
large. The experiments presented in \myChap{chap:simAndFindings}
are performed on \acp{DEM} having $2500$ cells (or pixels). The
elevation values for each cell come from the optimizer: in the
terminology of multi-objective optimization, they are the
control variables.

However, the large number of elevation values required constitutes
an additional obstacle for the optimization algorithm in
understanding the relation between controls and objectives, \ie
between the landscape elevations and the optimality criterion.
Additionally, the algorithm used here relies on a certain
randomness that is transferred to the landscape. In order to
strengthen the relation between the landscape and the criterion,
and to reduce the random noise of the surface, a spatial
interpolator is introduced and integrated into the model for some
experiments shown in \myChap{chap:simAndFindings}.
Consequently, a smaller number of control variables is requested
from the optimization algorithm, which can focus more on the search
for the optimum.
These values are then interpolated to find the function $f(\cdot)$
over the whole landscape.

Many spatial interpolators were evaluated: among them, \ac{IDW}
was chosen. It is very simple and requires neither many
parameters nor a complicated setup. Therefore, it was used as a first
attempt. Its details are explained in
\mySubsec{subs:spatialInterpolator}.
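The \ac{IDW} idea can be sketched as follows: each interpolated elevation is a weighted mean of the known control-point elevations, with weights decaying as an inverse power of distance. This is an illustrative sketch, not the thesis implementation, and the control points and power exponent below are hypothetical:

```python
import numpy as np

def idw(known_xy, known_z, query_xy, power=2.0):
    """Inverse Distance Weighting: weighted mean of known elevations,
    with weights 1/d**power (illustrative sketch)."""
    # Pairwise distances between known points and query points.
    d = np.linalg.norm(known_xy[:, None, :] - query_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at known points
    w = 1.0 / d ** power
    return (w * known_z[:, None]).sum(axis=0) / w.sum(axis=0)

# Four hypothetical control cells at the corners of a unit square...
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([0.0, 1.0, 1.0, 2.0])
# ...interpolated at the centre: all weights are equal there, so the
# result is the plain mean of the four elevations.
centre = idw(pts, z, np.array([[0.5, 0.5]]))
```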

\section{The search for the landscape: optimization}
\label{sec:optimization}
The method \graffito{Optimization looks for the landscapes which 
minimize the optimality criteria.} to evaluate $g(\cdot)$ has
been shown: it requires the elevation surface, \ie function
$f(\cdot)$, defined by \myEqNoSPace{eq:riverDefinition}:
\begin{equation}
\exists!\ f(\cdot) \in F(\cdot)\ |\ f(\cdot) = \arg \min
g(x,y,f(\cdot)).
\label{eq:riverDefinitionCopy}
\end{equation}

The identification of $f(\cdot)$ is, therefore, the search 
for the minimum of function $g(\cdot)$.

\subsection{Multi-objective optimization}
As already said in \mySecNoSpace{sec:MOframework}, the optimality
principles proposed in literature, \ie the functions $g(\cdot)$,
fail to reproduce the whole complexity of landscape evolution.
Therefore, a many-principle approach is applied. In practice, this
means that there are multiple $g(\cdot)$ functions, so it is not
possible to identify a single minimum point.

The mathematical names for this problem are multiobjective
optimization or programming, multicriteria optimization, or
Pareto optimization. The field they belong to is studied under the
topic of multiple criteria decision making and is concerned with
mathematical optimization problems, involving more than one
objective function to be optimized simultaneously. Multiobjective
optimization has been applied in many fields of science, including
engineering, economics and logistics, where optimal decisions need
to be taken in the presence of trade-offs between two or more
conflicting objectives.\footnote{Minimizing weight while
maximizing the strength of a particular component, and maximizing
performance whilst minimizing fuel consumption and emission of
pollutants of a vehicle are examples of multiobjective
optimization problems involving two and three objectives,
respectively.}

For a nontrivial multiobjective optimization problem, a single
solution that simultaneously optimizes each objective function,
\ie each $g(\cdot)$ function in this problem, does not exist. In
that case, the objective functions are said to be conflicting, and
a possibly infinite number of Pareto optimal solutions might
exist. A solution is called non-dominated, Pareto optimal or
efficient if none of the objective functions can be improved in
value without impairment in some of the other objective values.
Without additional preference information, all Pareto optimal
solutions can be considered mathematically equally good.

Mathematically speaking, \citeauthor{cohon:1975}
(\cite{cohon:1975} cited in \cite{hadka_borg:2012}) framed the
multiobjective design problem as a \enquote{vector optimization}
subject to a set of constraints defining the feasible region of
the decision space. We will call it \ac{MOP} as in
\cite{coello:2006}.
\MyEq{eq:minVector} formally defines the multiobjective design
problem as the minimization of a vector, $G(\mathbf{x})$, composed
of $K$ objective functions, $g_k(\mathbf{x})$:
\begin{gather}
\underset{\mathbf{x} \in \Omega}{\text{minimize}} \quad
G(\mathbf{x}) = \left[ g_1(\mathbf{x}), g_2(\mathbf{x}), \ldots,
g_K(\mathbf{x}) \right] \label{eq:minVector} \\
\text{subject to} \quad c_m (\mathbf{x}) \leq 0,\quad \forall m
\in \xi
\label{eq:constrain}
\end{gather}
where $\mathbf{x} = [x_1, x_2, \ldots, x_L ]$ is a vector of $L$
decision variables in the decision space $\Omega$.
\MyEq{eq:constrain} optionally subjects $G(\mathbf{x})$ to one or
more inequality constraints defined by $\xi$. Therefore, it is
possible to define $\Lambda$ as the set of decisions in $\Omega$
that satisfy these constraints.
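The notion of Pareto optimality used here can be illustrated with a naive non-dominated filter over objective vectors; the five candidate vectors below are hypothetical:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (minimization, as in the vector problem)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated (efficient) objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical objective vectors (g1, g2) for five candidate landscapes.
G = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
front = pareto_front(G)   # (3,3) is dominated by (2,3); (5,5) by several points
```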

Given the framework presented above,
\myEq{eq:riverDefinitionCopy}, which defines the solutions of the
problem presented here, has to be rewritten as:
\begin{align}
\exists f_i(\cdot) \in F(\cdot)\ |\ f_i(\cdot) & = \arg \min
\left(J_1, J_2, \ldots, J_K \right). \\
J_k & = g_k(\mathbf{x}, f(\cdot))\\ 
\text{s.t.} \qquad \mathbf{x} & \in \Lambda \subset \Omega
\notag\\
\mathbf{x} & = (x, y) \notag\\
z & = f(x, y) \notag\\
k & = 1, 2, \ldots, K \notag\\
i & = 1, 2, \ldots, \infty
\label{eq:riverDefinitionMultiObj}
\end{align}
where
\begin{itemize}
  \item $f_i(\cdot)$ is the $i$-th function belonging to the set of
  Pareto optimal solutions;
  \item $k$ identifies the principle;
  \item $x, y, z$ are subject to \myEq{eq:xyzDiscretization}.
\end{itemize}

\subsection{Dealing with multiple solutions}
\label{subs:multipleSolutions}
It is now clear that adopting many optimality principles leads
to a set of multiple solutions. This set is called the Pareto front.
One might wonder what the physical meaning of such a result is,
remembering that we are trying to simulate natural landscapes.
The simplest explanation is that each solution is the effect of
different weights applied to the objectives. In other words, it is
possible to build a single-objective optimization problem which
leads to that solution, the single objective being a proper
weighted sum of the multiple optimality principles. If one assumes
that at a given scale of analysis there is only one
\enquote{proper} solution among the points of the Pareto front,
this means that the real principle is the weighted sum of the
principles used to obtain that solution.\footnote{Other
explanations are possible: \eg nature is random and therefore a
single landscape shape as a response to river dynamics does not
exist. These kinds of explanations require a much larger
theoretical study of the problem and are therefore left to
researchers with greater experience on the subject.}
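The weighted-sum reading of the Pareto front can be sketched as follows; the front points and weight pairs below are hypothetical:

```python
# Hypothetical Pareto front of (g1, g2) values for three optimal landscapes.
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]

def weighted_choice(front, w1, w2):
    """The front point minimizing the scalarized objective w1*g1 + w2*g2."""
    return min(front, key=lambda g: w1 * g[0] + w2 * g[1])

# Different weightings of the two principles select different front points:
assert weighted_choice(front, 0.9, 0.1) == (1.0, 4.0)   # g1 matters most
assert weighted_choice(front, 0.1, 0.9) == (4.0, 1.0)   # g2 matters most
```

In this reading, identifying which front point matches nature amounts to identifying the weights of the unifying principle.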

The question now shifts to the method for recognizing the
\enquote{proper} solution: we will present it in
\mySec{sec:natIndexes}.

\subsection{Multi-objective algorithms}
In the literature, many algorithms exist for solving multi-objective
problems. None of them can be said to be generally superior to all
the others \cite{miettinen:1999}. They can be classified roughly
into two main groups by noting that many of them focus
\blockcquote[see][p. 63 and following]{miettinen:1999}{[\ldots]
on converting the problem into a single or a family of single
objective optimization problems with a real-valued objective
function termed the scalarizing function.} The simplified problem
is then solved with theories and methods of scalar optimization.
The approach is called scalarization.

A further classification can be made.
Historically, multi-objective optimization has been linked with
decision making, where the goal is to identify the solution
the decision maker is looking for. Therefore, some of the
algorithms rely on the expertise of the decision maker. These
algorithms are called preference-based methods by Cohon (cited in
\cite{miettinen:1999}) and cannot be used for the problem faced
here.

The remaining algorithms identify the set of Pareto optimal
solutions (or an approximation of that) without additional
information. As \citeauthor{coello:2006} perfectly summed up in
his review \citetitle{coello:2006} \blockcquote[see][ch. 3, p.
3]{coello:2006}{ [\ldots]the Operations Research community has
developed approaches to solve \ac{MOP}s since the 1950s.
Currently, a wide variety of mathematical programming techniques
to solve \ac{MOP}s are available in the specialized literature.
However, mathematical programming techniques have certain
limitations when tackling \ac{MOP}s. For example, many of them are
susceptible to the shape of the Pareto front and may not work when
the Pareto front is concave or disconnected. Others require
differentiability of the objective functions and the constraints.
Also, most of them only generate a single solution from each run.
Thus, several runs (using different starting points) are required
in order to generate several elements of the Pareto optimal set.}

To avoid any assumption about the solution, \ie about the shape of
the $f(\cdot)$ functions, a more general algorithm is needed. In the case
of discretization and a limited feasibility space $\Omega$, it is
possible to try each value and then select the Pareto optimal
solutions among them. The problem tackled in this thesis accepts
different controls: if one takes a \ac{DEM} composed of $51
\times 51$ cells, with the control for each of them assuming $18$
different values, the total number of control combinations would be
$8.11\times 10^{3013}$, so this method cannot be used. Instead of
trying each control combination, it is possible to try some of
them and choose the next candidate according to some
heuristic. Following this idea,
\blockcquote[see][ch. 3, p. 3]{coello:2006}{\ldots evolutionary
algorithms deal simultaneously with a set of possible solutions
(the so-called population) which allows us to find several members
of the Pareto optimal set in a single run of the algorithm.
Additionally, evolutionary algorithms are less susceptible to the
shape or continuity of the Pareto front (\eg they can easily deal
with discontinuous and concave Pareto fronts)}.
This is the idea evolutionary algorithms are based on.

\subsection{Evolutionary algorithms}
\label{subs:evolutionaryAlgo}
As previously explained, given the scarce \textit{a priori}
information available about the problem and the solution, a
\ac{MOEA} seems the best choice. However, the main drawback of an
\ac{EA} is that the set of solutions is just the best available
approximation of the real Pareto front: it is impossible to know
how good the approximation is.

Another important drawback is pointed out by
\citeauthor{hadka_diagnostic_2012} in
\citetitle{hadka_diagnostic_2012}: they empirically showed
\blockcquote{hadka_diagnostic_2012}{that parameterization can
greatly impact the performance of an MOEA, and for many
top-performing algorithms, this issue becomes severely challenging
as the number of objectives increases}.
Since parameterization is problem-specific, it is impossible to
know which one is best for the \ac{MOP} faced here before
solving it.

A good approach to the parameterization issue can be found
in \cite{hadka_borg:2012} and \cite{reed_evolutionary:2012}. They
used a sampling method (Latin Hypercube) to generate a set of
possible parameterizations for the evolutionary algorithms they wanted
to test. Then they solved the problem using each parameter set
many times, to overcome the influence of the random seed used. The
set of solutions is then taken as the best solutions across all
parameterizations, all seeds and all algorithms. The \ac{EA}s,
their parameterization and the number of different random seeds
used in our framework are detailed in
\mySecNoSpace{sec:optimizationAlgorithm},
\mySec{sec:mergingParetoFront} and \mySecNoSpace{sec:experiments}.
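The Latin Hypercube idea can be sketched as a stratified sampler: each parameter range is split into as many equal strata as there are samples, and each stratum is drawn exactly once. This is a simple sketch written for illustration, not the sampler used in the cited studies, and the parameter names and bounds below are hypothetical:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng):
    """Latin Hypercube sample over the given (low, high) parameter bounds."""
    dims = len(bounds)
    # One independent permutation of the strata per parameter, plus a
    # uniform offset inside each stratum, rescaled to [0, 1).
    strata = rng.permuted(np.tile(np.arange(n_samples), (dims, 1)), axis=1).T
    u = (strata + rng.random((n_samples, dims))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

rng = np.random.default_rng(42)
# Hypothetical EA parameters: population size in [10, 100], mutation rate in [0, 1].
samples = latin_hypercube(8, [(10.0, 100.0), (0.0, 1.0)], rng)
```

Each of the eight parameter sets would then be run with several random seeds, and the non-dominated solutions kept across all runs.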

\subsubsection{Advantages of \acp{MOEA}}
The choice of applying \acp{MOEA} to solve the optimization
problem arises firstly from \citeauthor{paik:2011}'s work
\cite{paik:2011}: he used a \ac{GA} to solve his single-objective
minimization problem. The same approach can easily be extended to
multiple principles. It has the additional advantage of producing
an entire Pareto front in a single run, while requiring few
mathematical properties of the problem to be minimized, compared
to other \ac{MO} algorithms.
Thanks to the availability of state-of-the-art \acp{MOEA} under
open source licenses, free of charge, the quality of the solution
found by a \ac{MOEA} is not an issue.

\subsection{Defining the control variables range}
In \mySubsec{subs:modelInputs}, the concept of a control variable
was introduced. Usually, when defining control variables, the
range of values they can assume is also established.
In the case of \citeauthor{paik:2011}'s single-objective \ac{GLE}
model, each \ac{DEM} cell was subject to a control variable
which could change the cell elevation by lowering or raising
it. Each control variable could assume the values $[-100, 0,
+100]$ centimeters.
As described in \cite{paik:2011}, the model was sequentially
iterated $15$ times, therefore each cell could vary its elevation
by a value in the range $[-1500, +1500]$ centimeters, discretized
with a step of $100$ centimeters, \ie $1$ meter.

As stated in \mySubsec{subs:multipleSolutions}, a Pareto front
composed of multiple solutions is produced as the output of our
framework at the end of each execution. Consequently, it becomes
difficult to make the framework iterate many times.
In fact, at the end of each sequential iteration, a new Pareto
front for every starting condition (\ie for every point of the
Pareto front obtained at the previous iteration) would be created.
As a consequence, the dimensionality of the solution set would
grow exponentially at each iteration (\eg if the first iteration
produced a Pareto front composed of $1000$ points, $1000$ other
Pareto fronts would be present at the end of the second iteration,
and so on) and the solution analysis would be extremely
difficult.

In order to overcome this problem, our framework is designed to
perform just one single iteration, but:
\begin{itemize}
  \item for the first experiment, which will be discussed in
  \myChapNoSpace{chap:simAndFindings}, the range of values each
  control variable can assume is extended, in order to give each
  \ac{DEM} cell the same freedom as in \citeauthor{paik:2011}'s
  \ac{GLE} model;
  \item when the \ac{IDW} interpolator is integrated in the model,
  the elevation itself of some \ac{DEM} cells becomes a control
  variable (while the others are obtained through interpolation),
  and it can assume almost any value higher than $0$.
\end{itemize}
Specific information about the control variable range settings for
each experiment will be provided in
\myChapNoSpace{chap:simAndFindings}.

\section{Naturality indexes}
\label{sec:natIndexes}
It \graffito{Naturality indexes are used to evaluate the different 
solutions of the Pareto front, by comparing the synthetic \ac{DEM}s 
and river networks to natural ones.}
has been explained in \mySubsecNoSpace{subs:multipleSolutions} that
adopting a \ac{MO} approach leads to multiple solutions and,
therefore, the problem of how to evaluate the Pareto front arises.
Evaluating the solutions belonging to the Pareto front means
evaluating the hypotheses the problem is built on, \ie the initial
guesses about the $g(\cdot)$ functions, the building blocks of the
model and the discretization values chosen.

From \mySecNoSpace{sec:formalization}, the process studied here is
landscape evolution under river dynamics: examples of this exist
almost everywhere on the Earth's surface. In the literature, some
methods to assess common geomorphological features of such a
system are cited. The most important features we will try to
assess are drainage network organization and channel slope
characteristics, in order to consider both 2D and 3D features of
river networks.
In particular, the indexes adopted for assessing these features
are based on Horton's laws and Hack's law, as explained in the next
paragraphs.

\subsection{Horton's laws}
\label{subs:hortonsLaw}
Historically, the first and most important numerical indexes to
assess river geomorphology are Horton's laws
\cite{horton_erosional:1945}. The following part is a summary
of chapter 1.2 of \cite{rodriguez_fractal:2001}.

The first law expresses the deep regularity of river networks
in terms of relations between their parts.
It is called Horton's law of stream numbers and is expressed as
\begin{equation}
\frac{N(\omega)}{N(\omega + 1)} = R_B
\label{eq:firstHortonLaw}
\end{equation}
where $N(\omega)$ is the number of streams of order $\omega$ and
$R_B$ is called the \emph{bifurcation ratio}. $N(\omega)$ is estimated
with the ordering procedure shown in \myFig{fig:hortonStrahler},
developed by Strahler (\cite{strahler_hypsometric_1952} and
\cite{strahler_quantitative_1957}) based on Horton's original
work.

The second law characterizes the regularity of river networks by
analyzing stream lengths. It is called Horton's law of stream
length:
\begin{equation}
\frac{\bar{L}(\omega + 1)}{\bar{L}(\omega)} = R_L
\label{eq:secondHortonLaw}
\end{equation}
where $\bar{L}(\omega)$ is the arithmetic average of the lengths
of streams of order $\omega$ and $R_L$ is called \emph{length
ratio}.
Typical values of $R_B$ and $R_L$ are $4$ and $2$ respectively,
with ranges of $[3, 5]$ for $R_B$ and $[1.5, 3.5]$ for
$R_L$ (\cite{rodriguez_fractal:2001},
\cite{moisello_idrologia:1999}, \cite{horton_erosional:1945}).

Horton found that $R_B$ and $R_L$ are approximately constant by
inspecting semi-log plots of $N(\omega)$ and $\bar{L}(\omega)$
against the order $\omega$. The two ratios $R_B$ and $R_L$ are
estimated from the slopes of linear fits of those plots
(right side of \myFig{fig:hortonStrahler}).
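The ordering procedure and the ratio estimation can be sketched as
follows. This is a minimal illustrative sketch, not the
implementation used in this thesis: the tree encoding (a map from
each node to its upstream nodes) and the sample stream counts and
lengths are hypothetical.

```python
import math

def strahler_order(children, node):
    """Strahler order of `node` in a drainage tree.
    `children` maps each node to its list of upstream nodes.
    A source link has order 1; the order increases by one only
    where two (or more) tributaries of the highest order meet."""
    ups = children.get(node, [])
    if not ups:
        return 1
    orders = sorted((strahler_order(children, c) for c in ups),
                    reverse=True)
    top = orders[0]
    if len(orders) > 1 and orders[1] == top:
        return top + 1
    return top

def horton_ratio(values):
    """Estimate a Horton ratio as exp(|slope|) of the least-squares
    fit of log(values) against the orders 1, 2, ..., n, mirroring
    the semi-log plot described in the text."""
    xs = list(range(1, len(values) + 1))
    ys = [math.log(v) for v in values]
    n = len(values)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(abs(slope))

# Hypothetical stream counts N(1..4) and mean lengths L(1..4),
# chosen to be exactly geometric.
N = [64, 16, 4, 1]        # R_B = 4
L = [1.0, 2.0, 4.0, 8.0]  # R_L = 2
print(round(horton_ratio(N), 2))  # -> 4.0
print(round(horton_ratio(L), 2))  # -> 2.0
```

The same fitting routine applies unchanged to mean areas or mean
slopes per order, yielding estimates of $R_A$ and $R_S$.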

\begin{figure}
\myfloatalign
\includegraphics[width=1.0\columnwidth]{Images/hortonStrahlerOrdering.pdf}
\caption[Horton-Strahler ordering procedure]{Horton-Strahler
ordering procedure. On the left: a drainage network is
classified according to Horton-Strahler ordering procedure; the
numbers next to the links are the orders $\omega$ evaluated with
that procedure. On the right: an example of the semi-log plot used
to evaluate $R_B$ and $R_L$.}
\label{fig:hortonStrahler}
\end{figure}

Horton did not explicitly include basin areas in his laws, but he
suggested that areas should satisfy a geometric relation like
stream lengths and numbers (\cite{horton_erosional:1945} and
\cite{shreve_stream:1969}). The law of basin areas was explicitly
stated by \citeauthor{schumm_evolution:1956}
\cite{schumm_evolution:1956}:
\begin{equation}
\frac{\bar{A}(\omega + 1)}{\bar{A}(\omega)} = R_A
\end{equation}
where $\bar{A}(\omega)$ is the mean total area contributing to
streams of order $\omega$ and $R_A$ is the \emph{area ratio} and
has typical value of $5$ \cite{rodriguez_fractal:2001}.

Also, Horton did not state any mathematical law to express the
relation between average channel slope and order of the stream.
Nonetheless, he showed a semi-log plot of average channel slope
against stream order in \cite{horton_erosional:1945}, suggesting
an inverse geometric-series law. \citeauthor{yang:1971}
\cite{yang:1971} explicitly stated the relation
\begin{equation}
\frac{\bar{S}(\omega)}{\bar{S}(\omega + 1)} = e^F = R_S
\end{equation}
where $\bar{S}(\omega)$ is the average slope of streams of order
$\omega$, $F$ is a constant and $R_S$ is called \emph{stream
concavity} \cite{yang:1971}.

\subsubsection{Meaning of Horton's laws}
As \citeauthor{rodriguez_fractal:2001} pointed out,
\blockcquote[see][p. 6]{rodriguez_fractal:2001}{the regular
geometric relationships contained in Horton's law and observed in
natural drainage networks have been interpreted as the signature
of some particular evolutionary criteria in network organization.
They have also been interpreted as evidence that drainage networks
are topologically random, meaning that chance is the only criteria
operating on the organization of the network. \ldots it has also
been argued that Horton's relations are not specific to any kind
of networks as they describe the vast majority of networks, random
or not.}

In this work, the focus is not on understanding whether Horton's
relations are specific to river networks or not. Therefore, we
base the use of these relations on the experience of the many
researchers and hydrologists before us \cite{strahler_quantitative_1957},
\cite{rodriguez_power:1992}, \cite{paik:2010}. Horton's relations
are used to check whether model outputs contain valid networks,
while other evaluation methods are employed to further assess
their closeness to natural river networks.

\subsection{Hack's law}
\citeauthor{hack:1957} applied a power function to relate the
length and area of streams in the Shenandoah Valley, Virginia.
\citeauthor{gray_interrelationships:1961} (cited in
\cite{rigon_hacks:1996}) and many other researchers later refined
and corroborated Hack's original analysis, and nowadays
\myEq{eq:HacksLaw} is generally accepted and called
Hack's law:
\begin{equation}
L \propto A^h
\label{eq:HacksLaw}
\end{equation}
where $L$ is the length of the longest stream from an outlet to
the divide, $A$ is the drained area at that point and $h$ is
generally accepted to be slightly below $0.6$, even if it may vary
from region to region.\footnote{\cite{rigon_hacks:1996} demonstrates
the validity of Hack's law for any point inside a basin.}
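Since \myEq{eq:HacksLaw} is linear in log-log coordinates, $h$ can
be estimated as the slope of a least-squares fit of $\log L$
against $\log A$. A minimal sketch follows; the (area, length)
pairs are hypothetical, generated from an assumed relation
$L = 1.4\,A^{0.57}$ purely for illustration.

```python
import math

def hacks_exponent(areas, lengths):
    """Estimate Hack's exponent h in L ∝ A^h as the slope of the
    log-log least-squares regression of lengths against areas."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(l) for l in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical pairs following L = 1.4 * A^0.57 exactly.
areas = [10.0, 100.0, 1000.0, 10000.0]
lengths = [1.4 * a ** 0.57 for a in areas]
print(round(hacks_exponent(areas, lengths), 2))  # -> 0.57
```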

\citeauthor{rigon_hacks:1996} \cite{rigon_hacks:1996} also
suggested that 
\begin{equation}
\gamma = \frac{\beta}{h}
\end{equation}
where $h$ is the exponent in Hack's law. $\beta$ comes from
\begin{equation}
\label{eq:probabilityDrainedArea}
P[A > a] \propto a^{-\beta}
\end{equation}
as found by \cite{rodriguez_power:1992}, where $A$ is the
contributing area at a randomly chosen point in a drainage
network. $\gamma$ is instead the exponent in
\begin{equation}
P[L > l] \propto l^{-\gamma}
\end{equation}
where $l$ is the stream length from any point to the divide.
Typical values for $\beta$ and $\gamma$ are $0.43\div 0.45$
and $0.70 \div 0.90$, respectively \cite{rigon_hacks:1996}. This
rather comprehensive set of relations and the consistency of the
measured exponents led \citeauthor{rigon_hacks:1996} to suggest
that this scaling theory is a complete description of the planar
organization of basins and their fractal behaviour.
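Exponents such as $\beta$ and $\gamma$ can be estimated from the
empirical exceedance probability of the sampled quantity, again
via a log-log fit; the estimate of $\gamma$ can then be
cross-checked against $\beta / h$. A minimal sketch, assuming a
synthetic power-law sample built by inverse transform purely for
illustration:

```python
import math

def exceedance_exponent(samples):
    """Estimate beta in P[X > x] ∝ x^(-beta) as minus the slope of
    the log-log fit of the empirical exceedance probability."""
    xs = sorted(samples)
    n = len(xs)
    lx, ly = [], []
    for i, x in enumerate(xs, start=1):
        if i == n:
            break  # exceedance is zero at the largest sample
        lx.append(math.log(x))
        ly.append(math.log((n - i) / n))
    m = len(lx)
    mx, my = sum(lx) / m, sum(ly) / m
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return -slope

# Synthetic sample with P[X > x] = x^(-0.45), built from
# deterministic uniform quantiles by inverse transform.
beta = 0.45
n = 2000
samples = [((i - 0.5) / n) ** (-1 / beta) for i in range(1, n + 1)]
print(round(exceedance_exponent(samples), 2))  # close to 0.45
```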

\subsection{Model outputs evaluation}
Hack's and Horton's relations, together with the analysis of
rivers' longitudinal profiles, are the assessment method for the
3D river networks produced by the model developed in this thesis.
The descriptive indexes explained above are, in fact, used to
understand which landscapes generated by the optimization process
are a reasonable representation of natural landscapes. They are
computed for the basins produced, and their values are then
compared with the natural ones. This allows an understanding of
the Pareto front produced by the optimization and therefore a
critical analysis of the hypotheses assumed in the model, \eg
the shape and number of the optimality principles, the $g(\cdot)$
functions and the procedure used to extract river networks from
the landscape.

\section{Hidden assumptions on the comparison with nature}
In \graffito{This work relies on the fact that there is no
time dynamic described by the principles \ldots} order to compare
the results of the model with natural landscapes, two more
hypotheses are needed. The first arises from a temporal analysis
of the model structure. Natural landscapes are, in fact, the
product of thousands of years of geological processes such as
tectonic uplift and river dynamics, \eg erosion, transport and
deposition. The model developed here emulates the effect of a
certain rate of rainfall, \ie it searches for the landscape
shape that minimizes energy expenditure principles, under the
assumption that such principles exist. It does not take into
account any previous rain event nor, once the interpolator is
inserted in the model, the previous shape of the landscape, \ie
the history of that landscape.
Comparing model outputs to natural river networks is possible
under the assumption that the optimality principles used here to
derive model outputs are always valid and do not define an
optimal point toward which river networks move during a
transient. These optimality principles do not describe any
temporal dynamics.

Another \graffito{\ldots and that the anisotropy of the
landscape components and the optimality principle are
independent.} strong assumption underlying this model is the
isotropy of the landscape: there is no simulation of different
geological compositions of rock, sand or clay layers. From the
point of view of erosion, transport and deposition, this does not
mean that these processes can act indifferently on any part of the
landscape, because they are not simulated in the model at all.
In fact, no process is simulated in the model: it just produces a
\enquote{picture} of a landscape subjected to rainfall and river
dynamics. This assumption rather means that the optimality
principle searched for to explain landscape evolution is
independent of the anisotropy of natural landscapes. This is the
second hypothesis needed to use statistics taken from the real
world to evaluate the results of the model presented here.

\section{Testing optimality principles: some remarks}
\label{sec:testingOptimality}
In the previous sections, the most important aspects of the
work-flow used in this work were shown and explained, and the
framework to test multiple optimality principles for explaining
landscape evolution was laid out explicitly.

The framework provides a rational procedure to isolate the
hypotheses, allowing them to be assessed on the basis of results.
Specifically, the hypotheses used here concern:
\begin{itemize}
  \item the optimality principles chosen, \ie the shape of the
  $g(\cdot)$ functions in \myEqNoSpace{eq:genericCost};
  \item the optimization process, \ie the $f(\cdot)$ functions;
  \item the procedure to evaluate the value of the principles on a
  given landscape, \ie the model (\mySecNoSpace{sec:theModel}).
\end{itemize}
From a bottom-up perspective, different models can be compared,
other hypotheses being equal, by comparing the values of Horton's
and Hack's relations. Once the model is set, one can test
different optimization processes and algorithms and assess them
by relative comparison. The same can be done with the chosen
optimality principles.

The framework relies on two untested hypotheses:
\begin{itemize}
  \item landscape evolution under river dynamics follows an
  optimality principle;
  \item the set of indexes used in the evaluation part is suitable
  to identify natural landscapes among many possible
  landscapes.\footnote{see \mySecNoSpace{sec:natIndexes}.}
\end{itemize}
Testing them is required to completely validate the results of
this work, but that would demand a broader review of the
literature and multiple experiments. However, many researchers
have assumed them to be true and have based their research on
them.

\bigskip

At this point of the thesis, all theoretical concepts about
landscape evolution under river dynamics and about the different
parts of the framework we introduce have been explained.
The second part of the thesis is devoted to the framework's
technical implementation and to a discussion of the obtained
results.

