% Chapter X

\chapter{Results} % Chapter title
\label{ch:results} % For referencing the chapter elsewhere, use \autoref{ch:name} 
The \nameref{ch:results} chapter is dedicated to the discussion of the results we obtained by applying our methods to the \nameref{sec:exsetup} presented in the homonymous section.

We start by describing the different scenarios (Section \ref{sec:scenario}) that characterize our experimental setup: scenario Uniform (Section \ref{subsec:A}), Biased (Section \ref{subsec:B}), and Corridor (Section \ref{subsec:C}).

Then, we present the \nameref{sec:metrics} that we have devised to evaluate the two most important properties of our methods: \emph{Allocation uniformity} and \emph{Allocation speed}.

The core of the chapter consists of Sections \nameref{sec:allocunif} and \nameref{sec:allocatime}, devoted to a thorough analysis of the aforementioned properties.

We conclude by evaluating the difficulty of the scenario (Section \ref{sec:diff}).

%on the performance of the \emph{Informed} (odometry-based) method (Section \ref{sec:odimpact}).

%----------------------------------------------------------------------------------------

\section{Scenarios}
\label{sec:scenario}
%\begin{table}[H]
%\myfloatalign
%\begin{tabularx}{\textwidth}{Xll} \toprule
%\tableheadline{Method} & \tableheadline{Arena Shape} & \tableheadline{Robot nest} \\ \midrule
%\tableheadline{scenario Uniform} & Square &  Center \\ \midrule
%\tableheadline{scenario Biased} & Square & Corner \\ \midrule
%\tableheadline{scenario Corridor} & Rectangle & Bottom \\
%\bottomrule
%\end{tabularx}
%\caption[Overview of the developed methods]{Overview of the developed methods \citeauthor{knuth:1976}}  
%\label{tab:example}
%\end{table}
The problem statement (cf. Definition \ref{def:problem}) gives us several degrees of freedom in the definition of an experimental setup.
In Section \nameref{sec:exsetup}, we specified the number of robots, the number of tasks, and the grouping of tasks into clusters.
However, the spatial distribution of these groups of tasks and the initial distribution of the robots have not been defined.
By varying these two aspects, we devised three different scenarios.
The purpose of this choice is to test the performance of the methods in different environments, in order to determine the strengths and weaknesses of each method. 

\subsection{Uniform}
\label{subsec:A}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5]
    \draw[dashed,black,fill=green!20] (-0.5,0.5) rectangle (0.5,-0.5);    
    \draw[black,very thick](-2,2) rectangle (2,-2);
      

    % Clusters
    \cluster{Cluster1}{1}{(1, 1)};
	\cluster{Cluster2}{2}{(-1, 1)};
	\cluster{Cluster3}{3}{(-1, -1)};	
	\cluster{Cluster4}{4}{(1, -1)};
\end{tikzpicture}
\caption[Representation of the cluster disposition in scenario Uniform]{Representation of the cluster disposition in scenario Uniform.
 
Clusters are represented by circles, while the dashed rectangle indicates the robot deployment area.

The arena size is $4\,m \times 4\,m$, while the deployment area is $1\,m \times 1\,m$.}\label{fig:scenarioA}
\end{figure}

The scenario Uniform (Figure \ref{fig:scenarioA}) is characterized by having the deployment area in the center of the arena, thus being equally distant from all the clusters.
\graffito{The notion of difficulty of a scenario will be clarified in \protect\nameref{sec:metrics}}
This is the first scenario we developed and the simplest one, which will serve as a baseline for comparison with the other ones.

\subsection{Biased}
\label{subsec:B}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5]
    \draw[dashed,black,fill=green!20] (1,-1) rectangle (2,-2);    
    \draw[black,very thick](-2,2) rectangle (2,-2);
      

    % Clusters
    \cluster{Cluster1}{1}{(1,1)};
	\cluster{Cluster2}{2}{(-1-0.3,1+0.3)};
	\cluster{Cluster3}{3}{(-1, -1)};	
	\cluster{Cluster4}{4}{(0, 0)};
      \end{tikzpicture}
\caption[Representation of the cluster disposition in scenario Biased]{Representation of the cluster disposition in scenario Biased.

Clusters are represented by circles, while the dashed rectangle indicates the robot deployment area.

The arena size is $4\,m \times 4\,m$, while the deployment area is $1\,m \times 1\,m$.}\label{fig:scenarioB}
\end{figure}

In the scenario Biased (Figure \ref{fig:scenarioB}), the deployment area is moved to the bottom-left corner of the arena and cluster $2$ is placed near the opposite corner of the environment.
This results in three clusters being closer to the deployment site and the fourth one far away.
We introduced this bias in the environment in order to test the influence of a non-uniform positioning of the clusters on the performance of the methods.

\subsection{Corridor}
\label{subsec:C}
\begin{figure}
\centering
\begin{tikzpicture}[scale=1.5]    
    \draw[dashed,black,fill=green!20] (-0.5,-1.5) rectangle (0.5,-2.5);    
    \draw[black,very thick](-1,2.5) rectangle (1,-2.5);
      

    % Clusters
    \cluster{Cluster1}{1}{(0, 2)};
	\cluster{Cluster2}{2}{(0, 1)};
	\cluster{Cluster3}{3}{(0, 0)};	
	\cluster{Cluster4}{4}{(0, -1)};
	
\end{tikzpicture}
\caption[Representation of the cluster disposition in scenario Corridor]{Representation of the cluster disposition in scenario Corridor.

Clusters are represented by circles, while the dashed rectangle indicates the robot deployment area.

The arena size is $2\,m \times 5\,m$, while the deployment area is $1\,m \times 1\,m$.}\label{fig:scenarioC}
\end{figure}

In scenario Corridor (Figure \ref{fig:scenarioC}) we placed the clusters in a narrow rectangular arena, closely resembling a corridor.
The clusters are equally spaced and the deployment area is located at one end of the corridor.
The motivation for the scenario Corridor is to test the impact of inter-robot interference on the performance of the methods.
In fact, the environment presents limited space around the clusters, due to the proximity of the walls and of the clusters themselves, which will cause the robots to aggregate and potentially interfere with each other.

\section{Metrics}
\label{sec:metrics}
\subsection{Allocated robots}
Since we are dealing with methods to allocate robots to tasks, the most intuitive aspects we can analyze are the number of robots that can actually be allocated and the time required to achieve such an allocation.
By indicating with $s_i(t)$ the internal state of robot $i$ at time $t$, we can define the number of allocated robots at each time step $t$ as:
\begin{equation}
R(t) = \sum_{i=1}^{20} \left(s_i(t)=\text{\emph{Perform Task}}\right) 
\end{equation}
We analyze this metric under two distinct points of view.
On one hand, we look at the number of robots allocated at each time step, analyzing, for instance, the maximum and minimum number of robots allocated at a given time step $t$ across a set of trials.
On the other hand, by looking at the evolution in time of this allocation function, we can easily compare the performances of the methods.
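The computation of $R(t)$ from a trial's state log can be sketched as follows; this is a minimal Python illustration, in which the array-based log layout and the string encoding of the \emph{Perform Task} state are assumptions made for the example, not the actual experiment format.

```python
import numpy as np

def allocated_robots(states):
    """Number of allocated robots R(t) at each time step.

    states: (T, N) nested list/array of robot internal states; a robot
    counts as allocated when its state equals "Perform Task"
    (hypothetical string encoding of the internal state).
    """
    return (np.asarray(states) == "Perform Task").sum(axis=1)

# Toy log: 3 time steps, 4 robots.
log = [["Explore", "Perform Task", "Explore", "Explore"],
       ["Perform Task", "Perform Task", "Explore", "Explore"],
       ["Perform Task", "Perform Task", "Perform Task", "Explore"]]
print(allocated_robots(log).tolist())  # [1, 2, 3]
```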

\subsection{Maximum error}
In section \nameref{sec:probstat}, we defined the allocation error $e_i(t)$ for each cluster as the difference between the desired number of robots in the cluster (i.e. the request $r_i$) and the current number of robots in the cluster (i.e. the occupation $o_i(t)$).
This measure makes it easy to understand and visualize how well the robots are allocated to a certain cluster.

Our goal, on the other hand, is to distribute the robots uniformly across the clusters, hence this per-cluster value is of little interest to us.
Nevertheless, by computing an aggregate measure over all the clusters, we can obtain a better view of the overall allocation.

Hence, we compute the maximum error across clusters:  
\begin{equation}
e_{\max}(t) = \max_{i \in\{1,\cdots,4\}} e_i(t) = \max_{i \in\{1,\cdots,4\}} (r_i-o_i(t))
\end{equation}
Indeed, the value of $e_{\max}(t)$ corresponds to the greatest difference between request and occupation in all the clusters.
Moreover, a uniform allocation of the robots is the one that best distributes the robots across the clusters.
This entails that, in case of a uniform allocation, the error $e_i(t)$ in each cluster will not be null, but the maximum error will be reduced, since the fair distribution of the robots reduces the differences $r_i-o_i(t)$ in each cluster.

The notion of fair distribution is applied to determine the lower bound for this metric, the optimal allocation error $e_{opt}$:
\begin{equation}
e_{opt} = \left\lfloor \frac{m - n}{C} \right\rfloor + 1
\end{equation}
Here, $m$ corresponds to the global number of tasks, $n$ is the number of robots and $C$ the number of clusters in the environment.
The optimal allocation error $e_{opt}$ corresponds to the value of $e_{\max}(t)$ that would be obtained by iteratively allocating one robot per cluster, in round-robin fashion, until the complete allocation of the swarm.
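Both quantities follow directly from the definitions above; the sketch below illustrates them for our setup of $m=25$ tasks, $n=20$ robots, and $C=4$ clusters (the request and occupation vectors are hypothetical, chosen only for the example).

```python
import numpy as np

def max_error(requests, occupations):
    """Maximum allocation error across clusters: e_max = max_i (r_i - o_i)."""
    return int((np.asarray(requests) - np.asarray(occupations)).max())

def optimal_error(m, n, C):
    """Optimal allocation error e_opt = floor((m - n) / C) + 1,
    with m tasks, n robots, and C clusters."""
    return (m - n) // C + 1

# Our setup: m = 25 tasks, n = 20 robots, C = 4 clusters.
print(optimal_error(25, 20, 4))  # 2
# Hypothetical requests (7, 6, 6, 6) with 5 robots in each cluster:
print(max_error([7, 6, 6, 6], [5, 5, 5, 5]))  # 2
```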


\subsection{Allocation levels}
Another way to measure how evenly the robots are distributed across clusters is to determine whether certain allocation levels are reached.
The concept of allocation level is defined with respect to the relative occupation $\frac{o_i(t)}{r_i}$ of each cluster.
If the relative occupation of all the clusters is greater than a certain value $x$, we say that the allocation level $x$ has been reached.
Whenever this occurs, we are also able to determine the time step $o_x$ at which the level is attained:
\graffito{$\bigwedge_i$ corresponds to the logical AND operation extended to every cluster $i$.}
\begin{align}
o_{x} = \underset{t}{\arg\min}\Big(\bigwedge_{i=1}^{4} \frac{o_i(t)}{r_i} \ge x\Big) &  & x \in \{0.25,0.50\} 
\end{align}
$o_x$ corresponds to the first time step at which the relative allocation of all the clusters is above the desired threshold $x$.

The values of $x$ are selected in order to have representative and feasible measures for the cluster allocation.
For this reason, given the distribution of requests $r_i$ and the optimal allocation error $e_{opt}$, the value of $0.75$ has been discarded, since it is inconsistent with the notion of fair allocation (i.e. the relative occupation of each cluster in the case of a fair allocation is not guaranteed to be above this threshold).
It should be noted that, given the previous definition of fair allocation, a relative occupation of all the clusters greater than or equal to $0.5$ is required in order to achieve the optimal allocation error $e_{opt}$.
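The attainment time $o_x$ can be extracted from an occupation trace as follows; the occupation log layout below is a hypothetical example, not the actual experiment data.

```python
import numpy as np

def level_time(occupations, requests, x):
    """First time step o_x at which all clusters' relative occupation
    o_i(t) / r_i is at least x; None if the level is never reached.

    occupations: (T, C) array of per-cluster occupations over time,
    requests: (C,) vector of requests r_i.
    """
    rel = np.asarray(occupations, dtype=float) / np.asarray(requests, dtype=float)
    reached = rel.min(axis=1) >= x  # all clusters above x <=> the minimum is
    hits = np.flatnonzero(reached)
    return int(hits[0]) if hits.size else None

# Toy run: 2 clusters with requests (4, 4).
occ = [[0, 0], [1, 0], [1, 1], [2, 2], [3, 2]]
print(level_time(occ, [4, 4], 0.25))  # 2
print(level_time(occ, [4, 4], 0.50))  # 3
print(level_time(occ, [4, 4], 0.75))  # None
```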

\subsection{Cluster views}
In section \nameref{sec:scenario} we presented the developed scenarios and the motivations behind their structure.

What we actually did, at design time, was formulating \emph{hypotheses} on how the structure of the environment could potentially affect the performance of the methods.

In order to \emph{test} our assumptions, we decided to measure the number of times that cluster $i$ has been seen at time $t$:
\graffito{The cluster-robot distance is measured from the robot to the closest TAM belonging to cluster $i$.}
\begin{equation}
v_i(t) = \sum_{s=0}^t \sum_{k=1}^{20} (d_{ik}(s) \le c_r)
\end{equation}
Here, $d_{ik}(s)$ represents the distance of robot $k$ from cluster $i$ at time $s$, while $c_r$ corresponds to the range of the omni-directional camera ($50$ cm in our experiments).
Through the analysis of the number of visits, we expect to assess whether the number of visits to a certain cluster is biased by the nature of the scenario and whether the differences in the methods have an impact on the exploration of the environment.
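The cumulative view count $v_i(t)$ can be computed as follows; the distance-log layout is a hypothetical example chosen for the illustration.

```python
import numpy as np

def cluster_views(distances, c_r=0.5):
    """Cumulative views v_i(t) per cluster.

    distances: (T, N, C) array of robot-to-cluster distances in metres
    (hypothetical log layout); c_r: camera range (0.5 m in our setup).
    Returns a (T, C) array where v[t, i] is the number of (step, robot)
    pairs, up to and including step t, at which cluster i was in range.
    """
    per_step = (np.asarray(distances) <= c_r).sum(axis=1)  # views per time step
    return per_step.cumsum(axis=0)                         # running total v_i(t)

# Toy log: 2 time steps, 2 robots, 2 clusters.
d = [[[0.3, 1.0], [0.6, 0.4]],
     [[0.2, 1.0], [0.9, 0.2]]]
print(cluster_views(d).tolist())  # [[1, 1], [2, 2]]
```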

%----------------------------------------------------------------------------------------


\section{Allocation uniformity}
\label{sec:allocunif}
The main goal of our methods is to diffuse the robots evenly across the spatially distributed clusters.

In order to achieve this goal, we improved the \emph{Naive} method by introducing probabilistic mechanisms (\emph{Probabilistic}) and informed decisions (\emph{Informed}) to favor the redistribution of the robots from crowded clusters to empty ones.

Here, we analyze the performance of the three methods, focusing on the final allocation achieved by each of them over a set of $50$ simulations.
Each simulation is characterized by a seed value, which will be used to initialize the simulator's internal random number generator, thus influencing the stochastic behavior of the method.
As a matter of fact, the seed will influence the initial robot placement and orientation and all the probabilistic decisions made by the methods.
We run each simulation for 1000 $s$, corresponding to 10000 simulated time steps, using the same set of 50 seeds for all the different methods, on the same scenario.

In order to evaluate the final allocation, we compute the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ for $t=10000$.
Instead of presenting the whole distribution of values, we decided to aggregate the values using non-parametric statistics, namely median value and interquartile range.
Moreover, since we are dealing with integer values (e.g. numbers of tasks or robots), the computation of some statistics (e.g. the mean) could result in fractional values having no real physical meaning.
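This aggregation step amounts to computing the median and quartiles of the 50 final values per method and scenario; a minimal sketch (the sample below is hypothetical) could look like:

```python
import numpy as np

def summarize(values):
    """Median and inter-quartile range (q25, q75) of a sample of
    final-trial values."""
    q25, med, q75 = np.percentile(np.asarray(values, dtype=float), [25, 50, 75])
    return float(med), (float(q25), float(q75))

# 50 hypothetical final e_max values: forty 3s and ten 4s.
sample = [3] * 40 + [4] * 10
print(summarize(sample))  # (3.0, (3.0, 3.0))
```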

The results are shown in Tables \ref{tab:sumemax} and \ref{tab:sumr}.

\subsection{Maximum error}
\label{par:unifallocrob}

\begin{table}[H]
\centering
\hbox{\hspace{-1.5cm}
\begin{tabular}{l c c c c c c}
\toprule % Top horizontal line
\tableheadline{Method} & \multicolumn{6}{c}{\spacedlowsmallcaps{Scenario}} \\ % Amalgamating several columns into one cell is done using the \multicolumn command as seen on this line
\cmidrule(l){2-7} % Horizontal line spanning less than the full width of the table - you can add (r) or (l) just before the opening curly bracket to shorten the rule on the left or right side
& \multicolumn{2}{c}{\spacedlowsmallcaps{Uniform}} & \multicolumn{2}{c}{\spacedlowsmallcaps{Biased}} & \multicolumn{2}{c}{\spacedlowsmallcaps{Corridor}} \\ 
\tableheadline{Statistic} & Median & $(q_{25},q_{75})$ & Median & $(q_{25},q_{75})$ & Median & $(q_{25},q_{75})$ \\
\midrule
\midrule 
\tableheadline{Naive} & 3 & (3,4) & 4 & (4,5) & 6 & (6,6) \\ 
\tableheadline{Probabilistic} & 3 & (3,3) & 3 & (3,4) & 5 & (4,5) \\ 
\tableheadline{Informed} & 3 & (3,3) & 3 & (3,3) & 5 & (4,5) \\ 
\midrule 
\bottomrule 
\end{tabular}}
\caption[Summary of the values of the maximum allocation error $e_{\max}(t)$ at $t=10000$.]{Summary of the values of the maximum allocation error $e_{\max}(t)$ at $t=10000$ for a swarm of 20 \emph{e-pucks} with 25 available tasks.

Median values and corresponding inter-quartile ranges are computed across 50 trials of 10000 time steps each.

The optimal allocation error $e_{opt}$ is equal to 2.}\label{tab:sumemax}
\end{table}

Table \ref{tab:sumemax} reports the median maximum error across the clusters at the end of the experiment.

The inter-quartile range is added in order to give information concerning the dispersion of the values (by definition, half of the values of the sample are included in the inter-quartile range). 

At first glance, we can observe that no method is able to achieve the optimal allocation error $e_{opt}$ of 2 tasks, but on scenario Uniform and scenario Biased the \emph{Probabilistic} and \emph{Informed} methods are able to reach an error of 3 tasks, with no variability.

Furthermore, by comparing the methods we are able to see that the enhanced methods (i.e. \emph{Probabilistic} and \emph{Informed}) have similar performances and both clearly outperform the \emph{Naive} one.

Also, by looking at the different scenarios, we can see an increase in the values of the median maximum error and in its variability, indicating a growing complexity of the environments.

\subsection{Allocated robots}
\begin{table}[H]
\centering
\begin{tabular}{l c c c c c c}
\toprule % Top horizontal line
\tableheadline{Method} & \multicolumn{6}{c}{\spacedlowsmallcaps{Scenario}} \\ % Amalgamating several columns into one cell is done using the \multicolumn command as seen on this line
\cmidrule(l){2-7} % Horizontal line spanning less than the full width of the table - you can add (r) or (l) just before the opening curly bracket to shorten the rule on the left or right side
& \multicolumn{2}{c}{\spacedlowsmallcaps{Uniform}} & \multicolumn{2}{c}{\spacedlowsmallcaps{Biased}} & \multicolumn{2}{c}{\spacedlowsmallcaps{Corridor}} \\ 
\tableheadline{Statistic} & Median & Range & Median & Range & Median & Range \\
\midrule
\midrule 
\tableheadline{Naive} & 20 & (12,20) & 20  & (9,20) & 19 & (6,20) \\ 
\tableheadline{Probabilistic} & 20 & (20,20) & 20 & (20,20) & 20 & (18,20) \\ 
\tableheadline{Informed} & 20 & (20,20) & 20 & (20,20) & 20 & (18,20) \\ 
\midrule 
\bottomrule 
\end{tabular}
\caption[Summary of the values of the number of allocated robots $R(t)$ at $t=10000$.]{Summary of the values of the number of allocated robots $R(t)$ at $t=10000$ for a swarm of 20 \emph{e-pucks} with 25 available tasks.

Median values and corresponding ranges ($\min$,$\max$) are computed across 50 trials of 10000 time steps each.}\label{tab:sumr}
\end{table}

Table \ref{tab:sumr} offers a complementary view of the final allocation, displaying the median number of allocated robots.

Here, instead of focusing on the inter-quartile range, the whole range of the 50 sampled values is presented, in order to verify whether all the methods are able to allocate all the robots or some problems arise.

As for the median values, all the methods are able to achieve the complete allocation of the robots, except the \emph{Naive} method on \emph{scenario Corridor}.

The same trend described in \nameref{par:unifallocrob} can be observed here: there is a clear difference between the \emph{Probabilistic} and \emph{Informed} methods' performances and the \emph{Naive} one.

The remarkably low minimum number of allocated robots of the \emph{Naive} method can be explained by the absence of the \emph{stalemate} check.
In fact, since no scenario presents obstacles or considerably narrow passages that could cause the robots to get stuck, we can safely suppose that after a sufficiently long time all the robots will eventually be allocated.
Moreover, such low minimum numbers of allocated robots are not observed for the \emph{Probabilistic} and \emph{Informed} methods, where the \emph{stalemate} condition is used.

Another view of the data concerning the number of allocated robots $R(t)$ can be found in the annex, section \ref{sec:annexallocated}.

\subsection{Allocation levels}
\begin{table}[H]
\centering
\begin{tabular}{l c c c c c c}
\toprule % Top horizontal line
\tableheadline{Method} & \multicolumn{6}{c}{\spacedlowsmallcaps{Scenario}} \\ % Amalgamating several columns into one cell is done using the \multicolumn command as seen on this line
\cmidrule(l){2-7} % Horizontal line spanning less than the full width of the table - you can add (r) or (l) just before the opening curly bracket to shorten the rule on the left or right side
& \multicolumn{2}{c}{\spacedlowsmallcaps{Uniform}} & \multicolumn{2}{c}{\spacedlowsmallcaps{Biased}} & \multicolumn{2}{c}{\spacedlowsmallcaps{Corridor}} \\ 
\tableheadline{Level} & 0.25 & 0.50 & 0.25 & 0.50 & 0.25 & 0.50 \\
\midrule
\midrule 
\tableheadline{Naive} & 0.900 & 0.660 & 0.240 & 0.020 & 0.200 & 0.020 \\ 
\tableheadline{Probabilistic} & 1.000 & 0.800 & 0.800 & 0.440 & 0.800 & 0.040 \\ 
\tableheadline{Informed} & 1.000 & 0.740 & 0.900 & 0.680 & 0.800 & 0.060 \\ 
\midrule 
\bottomrule 
\end{tabular}
\caption[Summary of the probabilities to reach allocation levels $o_{25}$ and $o_{50}$ across 50 trials within 10000 time steps]{Summary of the probabilities to reach allocation levels $o_{25}$ and $o_{50}$ across 50 trials within 10000 time steps for a swarm of 20 \emph{e-pucks} with 25 available tasks.}
\label{tab:sumprob}
\end{table}

Table \ref{tab:sumprob} summarizes the probabilities to reach the allocation levels $25$\% and $50$\% within the chosen experiment duration (10000 time steps).
The analysis of these results allows us to highlight the limitations of the proposed methods.
In fact, no method is able to achieve high probabilities for the $50$\% allocation level (a necessary condition for an optimal allocation) on all the scenarios.
Nevertheless, the \emph{Informed} and \emph{Probabilistic} methods are able to successfully reach the 25\% allocation level in all the trials, and the 50\% level in a substantial number of simulations. 

The \emph{Naive} method also has fairly good performance on \emph{scenario Uniform}, but its probabilities drop significantly on scenarios \emph{Biased} and \emph{Corridor}.
We suppose that this variation is determined by the combination of the biased placement of the clusters in \emph{scenario Biased} and the greedy allocation rule of the \emph{Naive} method, which hinders the possibility of a uniform allocation.
Another possible explanation for the performance of the \emph{Naive} method on scenario Corridor, especially for the $25$\% allocation level, can be given by looking at Table \ref{tab:sumr}.
In the \emph{Naive}-Corridor case, we observe a high variability in the number of allocated robots, which reaches a minimum value of $6$ units.
By referring to Table \ref{tab:requests}, we can observe that this value is smaller than the number of requests of a single cluster, making the likelihood of a uniform allocation across the clusters very low when using a greedy approach.

It should be noted that the performances of the other two methods are also affected by the change in scenarios.
However, the impact of this change is less dramatic than for the \emph{Naive} method, with \emph{Corridor} still being the most difficult scenario to tackle (i.e. the one having the smallest values for the cumulative probabilities of $o_{25}$ and $o_{50}$).

Again, by comparing the performance of the different methods, we notice the similarity between the \emph{Probabilistic} and \emph{Informed} methods' results.
The only remarkable difference is the higher value for the probability of reaching the 50\% allocation level in scenario Biased.
This result can be related to the use of \emph{odometry} in a reasonably large environment, such as the one depicted in Figure \ref{fig:scenarioB}.
The wider nature of this environment limits the number of encounters among the robots after the initial deployment phase.
Considering our implementation of the \emph{random walk} in both the \emph{Probabilistic} and \emph{Informed} methods, fewer encounters among the robots imply a reduced number of random turns, which in turn limits the exploration of the environment.
Due to the \emph{odometry}-based change of direction, introduced to avoid returning to already visited clusters, the \emph{Informed} method presents a higher probability of making random turns; thus it should better explore the environment and potentially achieve a more uniform distribution.


\section{Allocation speed}
\label{sec:allocatime}

The \nameref{sec:allocunif} section has been devoted to the evaluation of the uniformity of the allocation across clusters, mainly through the analysis of the final results achieved by the methods.

Another interesting point of view on our data can be given by analyzing how the methods have achieved these results, instead of restricting ourselves to the final values of the proposed metrics.

The simulation setup is the same as above: $50$ simulations of each method on each scenario, of $10000$ time steps ($1000$ s) each, using the same set of seeds on each scenario to ensure the same initial conditions for all the methods.

We try to gather information on the speed of the allocation by looking at the evolution of the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ for $t \in \{1,\cdots,10000\}$.
The two metrics allow us to focus on both the \emph{uniformity} aspect and the allocation \emph{velocity} one.
On one hand, the evolution of $e_{\max}(t)$ allows us to understand how much time the methods require to bring the overall error across clusters below a certain threshold, explaining also how uniformly the robots are distributed across clusters at a certain moment in time.
On the other hand, $R(t)$ captures how many robots are allocated to a task at a certain time step, making no distinction among clusters.
Through the comparison of the number of allocated robots for the different methods, we are able to evaluate how fast each method is in assigning the robots to the tasks.
In addition, we analyze the empirical cumulative distribution functions of the times needed to reach the allocation levels 25\% and 50\% ($o_{25}$ and $o_{50}$) in order to have a different standpoint on the \emph{uniformity} of the allocation.
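An empirical cumulative distribution function over the attainment times can be sketched as follows; the attainment times below are hypothetical, and trials that never reach the level keep the curve saturated below 1.

```python
import numpy as np

def ecdf(times):
    """Empirical CDF of level-attainment times over a set of trials.

    times: one entry per trial, either the time step at which the
    allocation level was reached or None if it never was.
    Returns (t, p): sorted attainment times and, for each of them, the
    fraction of ALL trials having reached the level by that time.
    """
    reached = np.sort([t for t in times if t is not None])
    p = np.arange(1, reached.size + 1) / len(times)
    return reached, p

# 5 hypothetical trials; one never reaches the level within the horizon.
t, p = ecdf([1200, 800, None, 3000, 800])
print(t.tolist())  # [800, 800, 1200, 3000]
print(p.tolist())  # [0.2, 0.4, 0.6, 0.8]
```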

As for the $e_{\max}(t)$ and $R(t)$, side by side comparisons of the plots including all the quantiles and the mean values can be found in the annex, chapter \ref{ch:addplots}.

\subsection{Scenario Uniform}
%\begin{figure}[H]
%\centering
%\includegraphics[width=0.8\linewidth,keepaspectratio]{{./Figures/A.Median}.pdf}
%\caption[Median value of $e_{max}$ on scenario Uniform]{Median value of $e_{max}$ on scenario Uniform across 50 trials of 10000 time steps each.}
%\label{fig:Amedianmax}
%\end{figure}


By looking at Figure \ref{fig:AMaxErrorRobots}, we can see that, considering the median number of allocated robots, the \emph{Naive} method is able to achieve the fastest allocation in \emph{scenario Uniform} (Figure \ref{subfig:ARobots}), with all the 20 robots successfully performing a task in less than 1500 time steps.
Up to 500 time steps, the \emph{Probabilistic} and \emph{Informed} curves are paired but eventually, the \emph{Probabilistic} method achieves a faster allocation than the \emph{Informed} one.

However, the fastest allocation is not necessarily the fairest.
As we see in Figure \ref{subfig:AError}, the \emph{Naive} method is also the one presenting the highest median maximum error across clusters, while the plots of the \emph{Probabilistic} and \emph{Informed} methods are superimposed.
The analysis of the empirical cumulative distribution functions (Figure \ref{fig:A2550}) confirms this trade-off between speed and quality of the allocation.

A greedy method (i.e. \emph{Naive}) is the one ensuring the fastest allocation of the robots to the tasks, since the robots spend less time exploring the environment.
Nevertheless, given the structure of \emph{scenario Uniform} (Figure \ref{fig:scenarioA}), with a central deployment area equally distant from all the clusters and a uniform distribution of the robots at the beginning of the experiment, relatively high probabilities of satisfying at least 50\% of the occupation of the clusters (i.e. $o_{50}$) can still be obtained.
It should be noted that, looking at the \emph{uniformity} of the allocation, even on the simplest scenario the \emph{Probabilistic} and \emph{Informed} methods attain a smaller value of $e_{\max}$ faster than the \emph{Naive} one, but eventually all the curves converge to the value of $3$.

\subsection{Scenario Biased}

Figure \ref{fig:BMaxErrorRobots} gives additional evidence in support of the hypothesis of the existence of a trade-off between \emph{speed} and \emph{allocation quality}.

Again, the \emph{Naive} method represents the upper bound in terms of allocation velocity while being the lower bound in terms of median maximum allocation error across clusters.

By looking at Figures \ref{subfig:BError} and \ref{subfig:BRobots}, no remarkable differences can be seen concerning the performance of the \emph{Probabilistic} and \emph{Informed} methods.

With respect to \emph{scenario Uniform} (Figure \ref{fig:AMaxErrorRobots}), we can observe that, in \emph{scenario Biased}, more time is required to reach the same levels of robot allocation and error.
We explain this shift in the curves with the differences in the nature of the environment, more precisely the placement of cluster $2$.
While in \emph{scenario Uniform} it is as distant as all the other clusters from the deployment area, in \emph{scenario Biased} it is the farthest one.
As a consequence, more time is required to navigate through the arena to actually reach tasks belonging to the second cluster, before eventually deciding to allocate to them.

In addition, we suppose that the bias in the positioning of the clusters could be a potential cause of the highest median maximum error across clusters $e_{\max}$ reached by the \emph{Naive} method at the end of the experiment.

In Figure \ref{fig:B2550}, we can observe that the \emph{Naive} method is able to reach the 25\% and 50\% allocation levels in a smaller number of trials with respect to the other methods.
Moreover, whenever those levels are reached, the time required to reach these allocation thresholds is higher than for both the \emph{Probabilistic} and \emph{Informed} methods.

Thus, we can conclude that the performance of the \emph{Naive} method is clearly worse than that of the enhanced methods on \emph{scenario Biased}.

\subsection{Scenario Corridor}

Figure \ref{fig:CMaxErrorRobots} highlights the relative improvements the introduction of probabilistic mechanisms and the use of odometry have brought to the \emph{Naive} method.

In Figure \ref{subfig:CRobots} we can see that the \emph{Naive} method is slower than the \emph{Probabilistic} and \emph{Informed} ones and is not able to reach the complete allocation of the robots to the tasks.
On the contrary, the enhanced methods are able to attain this objective, with the \emph{Informed} one being the fastest.

Although there is an improvement in the velocity of the allocation, it should be noted that the median maximum error across clusters, even for the \emph{Informed} method, converges to a value considerably higher than the optimal one.

Furthermore, the time needed to achieve the complete allocation of the robots is more than four times higher than that of \emph{scenario Biased}.
Since this delay is observed in all the methods, we take it as an evidence supporting our hypothesis of increasing difficulty of the scenarios.

Figure \ref{fig:C2550} confirms the better overall performance of the \emph{Informed} method.
In Figure \ref{subfig:C25}, the steeper increase of the empirical cumulative distribution function shows that the 25\% allocation level is reached more often, and in less time, than with the other methods.
While the same final probability value is reached by the \emph{Probabilistic} method as well, here the \emph{Naive} method displays its limits.
Nevertheless, Figure \ref{subfig:C50} attests that the 50\% allocation level is rarely reached by any of the methods, confirming the impossibility of achieving a uniform allocation in this scenario.

\begin{figure}[H]
\myfloatalign
\subfloat[Median $e_{\max}(t)$. The dashed grey line represents the optimal allocation error $e_{opt}$.
The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/A.Median}.pdf}
\label{subfig:AError}}\\ 
\subfloat[Median $R(t)$.
The dashed grey line represents the total number of robots in the experiment.
The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/A.Robots.Median}.pdf}
\label{subfig:ARobots}} 
\caption[Median values for the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ on the scenario Uniform.]{Median values for the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ for each of the three methods on the scenario Uniform on 50 trials of 1000 $s$ (10000 simulation steps) each. Each trial is performed with $20$ robots and $25$ available tasks.
}\label{fig:AMaxErrorRobots}
\end{figure}

\begin{figure}[H]
\myfloatalign
\subfloat[Allocation level: 25\%. The differences among the curves are statistically significant (Mann-Whitney test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/A.25}.pdf}
}\\ 
\subfloat[Allocation level: 50\%. The differences among the curves are statistically significant (Mann-Whitney test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/A.50}.pdf}} 
\caption[Empirical cumulative distribution functions of the allocation levels $o_{x}$, $x\in\{0.25,0.50\}$, for the three methods on the scenario Uniform.]{Empirical cumulative distribution functions of the allocation levels $o_{x}$, $x\in\{0.25,0.50\}$, for the three methods on the scenario Uniform on 50 trials of 1000 $s$ (10000 simulation steps) each. Each trial is performed with $20$ robots and $25$ available tasks.
}\label{fig:A2550}
\end{figure}



\begin{figure}[H]
\myfloatalign
\subfloat[Median $e_{\max}(t)$. Only the differences between the \emph{Naive} and \emph{Probabilistic} curves and between the \emph{Naive} and \emph{Informed} curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/B.Median}.pdf}
\label{subfig:BError}}\\ 
\subfloat[Median $R(t)$. Only the differences between the \emph{Naive} and \emph{Probabilistic} curves and between the \emph{Naive} and \emph{Informed} curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/B.Robots.Median}.pdf}
\label{subfig:BRobots}} 
\caption[Median values for the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ on the scenario Biased.]{Median values for the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ for each of the three methods on the scenario Biased on 50 trials of 1000 $s$ (10000 simulation steps) each.
Each trial is performed with $20$ robots and $25$ available tasks.
}\label{fig:BMaxErrorRobots}
\end{figure}

\begin{figure}[H]
\myfloatalign
\subfloat[Allocation level: 25\%. The differences among the curves are statistically significant (Mann-Whitney test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/B.25}.pdf}}\\ 
\subfloat[Allocation level: 50\%. The differences among the curves are statistically significant (Mann-Whitney test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/B.50}.pdf}} 
\caption[Empirical cumulative distribution functions of the allocation levels $o_{x}$, $x\in\{0.25,0.50\}$, for the three methods on the scenario Biased.]{Empirical cumulative distribution functions of the allocation levels $o_{x}$, $x\in\{0.25,0.50\}$, for the three methods on the scenario Biased on 50 trials of 1000 $s$ (10000 simulation steps) each.
Each trial is performed with $20$ robots and $25$ available tasks.
}\label{fig:B2550}
\end{figure}


\begin{figure}[H]
\myfloatalign
\subfloat[Median $e_{\max}(t)$. The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/C.Median}.pdf}
\label{subfig:CError}}\\ 
\subfloat[Median $R(t)$. The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/C.Robots.Median}.pdf}
\label{subfig:CRobots}} 
\caption[Median values for the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ on the scenario Corridor.]{Median values for the maximum error across clusters $e_{\max}(t)$ and the number of allocated robots $R(t)$ for each of the three methods on the scenario Corridor on 50 trials of 1000 $s$ (10000 simulation steps) each. Each trial is performed with $20$ robots and $25$ available tasks.
}\label{fig:CMaxErrorRobots}
\end{figure}


\begin{figure}[H]
\myfloatalign
\subfloat[Allocation level: 25\%. The differences among the curves are statistically significant (Mann-Whitney test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/C.25}.pdf}
\label{subfig:C25}}\\ 
\subfloat[Allocation level: 50\%. The differences among the curves are statistically significant (Mann-Whitney test, $p \ll 0.05$).]
{
\includegraphics[width=0.4\textheight,keepaspectratio]{{./Figures/C.50}.pdf}
\label{subfig:C50}} 
\caption[Empirical cumulative distribution functions of the allocation levels $o_{x}$, $x\in\{0.25,0.50\}$, for the three methods on the scenario Corridor.]{Empirical cumulative distribution functions of the allocation levels $o_{x}$, $x\in\{0.25,0.50\}$, for the three methods on the scenario Corridor on 50 trials of 1000 $s$ (10000 simulation steps) each. Each trial is performed with $20$ robots and $25$ available tasks.
}\label{fig:C2550}
\end{figure}


\section{Scenario difficulty}
\label{sec:diff}
After having discussed the properties of the different methods, we now focus on those of the scenarios, trying to characterize their difficulty with respect to achieving a uniform allocation.

We choose to measure this difficulty in terms of the number of times the different clusters are seen by the robots, following a simple intuition: the more often a cluster is viewed, the higher the likelihood that a robot will decide to allocate itself to it.
This implies that, whenever there is a remarkable difference in the number of views across clusters, achieving a uniform allocation becomes difficult, since there exists a bias towards one or more of the clusters.
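One simple way to quantify this bias is the ratio between the view counts of the most- and the least-seen cluster. The ratio itself is our illustrative choice rather than a metric defined in this work; the sketch below applies it to the median view counts of the \emph{Naive} method reported in Tables \ref{tab:sumvisitsA} and \ref{tab:sumvisitsB}:

```python
def view_imbalance(views):
    """Ratio between the most- and the least-viewed cluster:
    1.0 means perfectly balanced views, larger values indicate
    a stronger bias towards some clusters."""
    return max(views) / min(views)

# Median view counts at t = 10000 (Naive method) from the tables
# in this section: scenario Uniform and scenario Biased.
uniform_naive = [5650, 4673, 5464, 5070]
biased_naive = [13805, 498, 13231, 11854]

print(round(view_imbalance(uniform_naive), 2))  # 1.21: near-uniform views
print(round(view_imbalance(biased_naive), 2))   # 27.72: strong bias against cluster 2
```

The two-order-of-magnitude gap between the scenarios matches the qualitative difficulty ordering discussed in the text.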

Moreover, the distribution of the views across the clusters depends on the exploration strategy adopted by each method.
Since the $v_i(t)$ metric is the cumulative sum of the number of views of cluster $i$ up to time $t$, we can assess the effectiveness of each method's exploration strategy by evaluating the magnitude of $v_i(t)$.
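As a concrete illustration of how $v_i(t)$ can be computed from raw sighting logs, here is a minimal Python sketch; the event format and function name are our own assumptions for illustration, not part of the simulator:

```python
import numpy as np

def cumulative_views(view_events, n_clusters, n_steps):
    """Compute v_i(t) for every cluster i: the cumulative number of
    times cluster i has been seen by any robot up to time step t.

    view_events: iterable of (t, cluster_id) pairs, one per sighting.
    Returns an array of shape (n_clusters, n_steps); row i is v_i(t).
    """
    counts = np.zeros((n_clusters, n_steps), dtype=int)
    for t, cluster in view_events:
        counts[cluster, t] += 1
    # Cumulative sum along the time axis turns per-step counts into v_i(t).
    return np.cumsum(counts, axis=1)

# Toy trial: cluster 0 is seen at steps 1, 2 and 4; cluster 1 at step 3.
v = cumulative_views([(1, 0), (2, 0), (3, 1), (4, 0)], n_clusters=2, n_steps=6)
print(v[0].tolist())  # [0, 1, 2, 2, 3, 3]
print(v[1].tolist())  # [0, 0, 0, 1, 1, 1]
```

The curves reported in Figures \ref{fig:AClusters}--\ref{fig:CClusters} correspond to the median of such per-trial $v_i(t)$ arrays across the 50 trials.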

\subsection{Scenario Uniform}

The assumption of simplicity of this scenario, made during the design phase, is confirmed by both Figure \ref{fig:AClusters} and Table \ref{tab:sumvisitsA}.
For all the methods, we observe a quasi-uniform distribution across the clusters of the median number of views at time $t=10000$.
Regarding the number of times the clusters have been seen, the \emph{Naive} method is the one with the most limited variability across clusters.
Concerning the magnitude of the number of views, we observe a non-negligible difference between the enhanced methods (\emph{Probabilistic} and \emph{Informed}) and the \emph{Naive} one.

\subsection{Scenario Biased}

The \emph{scenario Biased} is characterized by clusters $1$, $3$ and $4$ being closer to the deployment area than cluster $2$.
This evident bias in the cluster distribution is confirmed by the magnitude of the median number of views, as shown in Table \ref{tab:sumvisitsB}: for all the methods, it is similar for the three closer clusters and completely different for the isolated one.
The difference is clearly displayed in Figure \ref{fig:BClusters}.
Moreover, with the \emph{Naive} method, the median number of views of cluster $2$ is ten times smaller than with the \emph{Probabilistic} and \emph{Informed} ones.
We believe that this systematic neglect of the isolated cluster arises from the greedy allocation strategy of the first method, which lacks any probabilistic redistribution mechanism.
Given the nature of the \emph{Naive} method, in fact, the robots try to go past clusters $1$, $3$ and $4$ only when the requests of those clusters are completely satisfied.


\subsection{Scenario Corridor}

The distinctive feature of \emph{scenario Corridor} is the rectangular shape of the arena, whose area is slightly smaller than that of the other two.
The clusters are arranged linearly with respect to the deployment area: the greater the cluster id, the closer the cluster.

The narrower profile of the arena has a strong impact on the magnitude of the median number of views of the clusters (Table \ref{tab:sumvisitsC}): for instance, for the \emph{Naive} method on cluster $4$, it is ten times larger than the corresponding value in \emph{scenario Uniform}.

At first glance, the fact that cluster $3$ has a greater number of views than cluster $4$ for all the methods seems surprising.
A possible explanation is that the closest cluster is both the most likely to be seen and the most likely to become fully occupied in a short time.
As shown in Figure \ref{fig:CClusters}, once its request has been satisfied, the cluster rebounds the robots elsewhere in the environment, thus favoring the exploration of other clusters.

Also, the method with the highest number of views of the clusters closer to the deployment area is the \emph{Naive} one, while its $v_1(t)$ (the farthest cluster) is the smallest.
This can be explained by the fact that the \emph{Naive} method does not possess any mechanism to perform an efficient exploration of the environment.
For this reason, especially in a narrow environment like \emph{scenario Corridor}, a robot is likely to keep viewing the clusters it has already decided to leave, thus increasing the number of views without actually allocating.


\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth,keepaspectratio]{{./Figures/A.Clusters}.pdf}
\caption[Median value of $v_i(t)$ for $i\in\{1,\cdots,4\}$ on scenario Uniform]{Median value of $v_i(t)$ for $i\in\{1,\cdots,4\}$ on scenario Uniform across 50 trials of 10000 time steps each. 
Each trial is performed with $20$ robots and $25$ available tasks.
The cluster disposition can be seen in Figure \ref{fig:scenarioA}.
The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).}
\label{fig:AClusters}
\end{figure}


\begin{table}[H]
\centering
\begin{tabular}{l c c c c}
\toprule % Top horizontal line
& \multicolumn{4}{c}{\spacedlowsmallcaps{Cluster}} \\ % Amalgamating several columns into one cell is done using the \multicolumn command as seen on this line
\cmidrule(l){2-5} % Horizontal line spanning less than the full width of the table - you can add (r) or (l) just before the opening curly bracket to shorten the rule on the left or right side 
\tableheadline{Method} & 1 & 2 & 3 & 4 \\
\midrule
\tableheadline{Naive} & 5650 & 4673 & 5464 & 5070 \\
\tableheadline{Probabilistic} & 7866 & 5942 & 7420 & 6716 \\
\tableheadline{Informed} & 7412 & 7198 & 8734 & 6481  \\ 
\bottomrule 
\end{tabular}
\caption[Summary of the median values of cluster views $v(t)$ at $t=10000$ in scenario Uniform.]{Summary of the median values of cluster views $v(t)$ at $t=10000$ in scenario Uniform.
Median values are computed across 50 trials of 10000 time steps each.
Each trial is performed with $20$ robots and $25$ available tasks.}
\label{tab:sumvisitsA}
\end{table}

\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth,keepaspectratio]{{./Figures/B.Clusters}.pdf}
\caption[Median value of $v_i(t)$ for $i\in\{1,\cdots,4\}$ on scenario Biased]{Median value of $v_i(t)$ for $i\in\{1,\cdots,4\}$ on scenario Biased across 50 trials of 10000 time steps each.
Each trial is performed with $20$ robots and $25$ available tasks.
The cluster disposition can be seen in Figure \ref{fig:scenarioB}. The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).}
\label{fig:BClusters}
\end{figure}

\begin{table}[H]
\centering
\begin{tabular}{l c c c c}
\toprule % Top horizontal line
& \multicolumn{4}{c}{\spacedlowsmallcaps{Cluster}} \\ % Amalgamating several columns into one cell is done using the \multicolumn command as seen on this line
\cmidrule(l){2-5} % Horizontal line spanning less than the full width of the table - you can add (r) or (l) just before the opening curly bracket to shorten the rule on the left or right side 
\tableheadline{Method} & 1 & 2 & 3 & 4 \\
\midrule
\tableheadline{Naive} & 13805 & 498 & 13231 & 11854 \\ 
\tableheadline{Probabilistic} & 13390 & 4118 & 14405 & 12924 \\ 
\tableheadline{Informed} & 12299 & 4726 & 11242 & 10457 \\ 
\bottomrule 
\end{tabular}
\caption[Summary of the median values of cluster views $v(t)$ at $t=10000$ in scenario Biased.]{Summary of the median values of cluster views $v(t)$ at $t=10000$ in scenario Biased.
Median values are computed across 50 trials of 10000 time steps each.
Each trial is performed with $20$ robots and $25$ available tasks.}
\label{tab:sumvisitsB}
\end{table}



\begin{figure}[H]
\centering
\includegraphics[width=0.9\linewidth,keepaspectratio]{{./Figures/C.Clusters}.pdf}
\caption[Median value of $v_i(t)$ for $i\in\{1,\cdots,4\}$ on scenario Corridor]{Median value of $v_i(t)$ for $i\in\{1,\cdots,4\}$ on scenario Corridor across 50 trials of 10000 time steps each.
Each trial is performed with $20$ robots and $25$ available tasks.
The cluster disposition can be seen in Figure \ref{fig:scenarioC}.
The differences among the curves are statistically significant (Wilcoxon signed-rank test, $p \ll 0.05$).}
\label{fig:CClusters}
\end{figure}

\begin{table}[H]
\centering
\begin{tabular}{l c c c c}
\toprule % Top horizontal line
& \multicolumn{4}{c}{\spacedlowsmallcaps{Cluster}} \\ % Amalgamating several columns into one cell is done using the \multicolumn command as seen on this line
\cmidrule(l){2-5} % Horizontal line spanning less than the full width of the table - you can add (r) or (l) just before the opening curly bracket to shorten the rule on the left or right side 
\tableheadline{Method} & 1 & 2 & 3 & 4 \\
\midrule
\tableheadline{Naive} & 4363 & 28015 & 73242 & 57410 \\ 
\tableheadline{Probabilistic} & 7247 & 23554 & 62802 & 49276 \\ 
\tableheadline{Informed} & 7516 & 18148 & 47106 & 39893  \\ 
\bottomrule 
\end{tabular} 
\caption[Summary of the median values of cluster views $v(t)$ at $t=10000$ in scenario Corridor.]{Summary of the median values of cluster views $v(t)$ at $t=10000$ in scenario Corridor.
Median values are computed across 50 trials of 10000 time steps each.
Each trial is performed with $20$ robots and $25$ available tasks.}
\label{tab:sumvisitsC}
\end{table}


