% Chapter X

\chapter{Methodology} % Chapter title
\label{ch:meth} % For referencing the chapter elsewhere, use \autoref{ch:name} 

The \nameref{ch:meth} chapter contains an accurate description of the methodology we adopted for the development of our methods.

We begin by giving a formal definition of the allocation problem we are going to tackle (\nameref{sec:probstat}).

In \nameref{sec:exsetup}, we describe how we translated the problem statement into a physical implementation of the problem.

Then, in \nameref{sec:meth}, we present our contribution using a top-down approach.
We start by presenting an \nameref{subsec:overview} of the underlying idea of the methods.
Lastly, we characterize our methods: \nameref{subsec:naive}, \nameref{subsec:prob} and \nameref{subsec:inf}.

% Contextualize my work: Differences from the last article -> The redistribution must be done in real-time without global information concerning the topology of the environment or the transition rates.
%Explain motivations behind our choices -> Modularity of the approach and fast prototyping

%----------------------------------------------------------------------------------------

\section{Problem statement}
\label{sec:probstat}
The class of problems we are interested in is the one related to the allocation of a swarm of robots to spatially distributed tasks.

We speak of a class since several features of the problem statement may vary, yielding different problem instances.
Generally speaking, the problem can be described as: 
\begin{definition}
Given $n$ robots and $m$ tasks having a precise spatial distribution, determine a mapping of the robots to the tasks which optimizes a given allocation metric.
\end{definition}

Clearly, the parameters of this general definition are: the \emph{number} of \emph{robots}, the \emph{number} of \emph{tasks}, the \emph{distribution} of \emph{tasks} and the \emph{allocation metric}.

Moreover, the \emph{relation} between the \emph{number} of \emph{robots} and \emph{tasks} is a key factor in the problem statement, strictly related to the allocation metric.

For example, in the scenario where the number of robots is greater than or equal to the number of tasks ($n \ge m$), if the experiment duration is sufficiently long, the allocation of all the agents will eventually be reached.
In that case, it could be interesting to analyze the speed with which the complete allocation is achieved.

On the other hand, if the number of tasks is greater than the number of agents ($n<m$), the allocation metric could measure how evenly the robots are spread over the tasks in space or how fast the swarm reaches some allocation ``milestones'' (e.g. 25\%, 50\%, 75\% of the tasks).

Regarding the \emph{spatial distribution} (in a two-dimensional space) we could foresee two possible categories of problems: \emph{random} and \emph{deterministic} distribution.

With a \emph{random} distribution, the positions of the tasks are determined by a bivariate random distribution (e.g. normal or uniform).

Conversely, a \emph{deterministic} distribution consists of a precise disposition of the tasks in space, so as to form a \emph{grid} or \emph{clusters}.

In light of this classification, we can give a precise definition of our problem:

\begin{definition}
Given $n$ robots and $m$ tasks ($n<m$) clustered in space, determine a mapping of the robots to the tasks which is as uniform as possible across clusters.
\label{def:problem}
\end{definition}

We assume that the values of $n$ and $m$ remain constant for the whole duration of the experiment.
Furthermore, the robots are assumed to be \emph{identical} and the tasks are considered \emph{homogeneous} (i.e. there is no difference among tasks in terms of the skills required to perform them) and \emph{independent} of each other (i.e. there are no relations among the tasks).

The allocation of a robot to a task automatically prevents the allocation of other robots to that task.
Moreover, since the tasks are homogeneous, any robot can perform any available task.

In addition, the $m$ tasks to be performed are distributed across $c$ clusters in space.
For each cluster $i$, we can define the \emph{request} $r_i$ and the \emph{occupation} $o_i(t)$.

The \emph{request} $r_i$ is the number of tasks, out of the total $m$, that belong to cluster $i$.

The \emph{occupation} $o_i(t)$ corresponds to the number of tasks of the cluster $i$ that are being performed by a robot at time $t$.

Consequently, we have $0 \le o_i(t) \le r_i \le m$, $\forall i,t$.

Given the two measures, we can compute a third one, the \emph{error}, which will serve as a measure for the quality of the allocation.
\begin{equation}
e_i(t) = r_i - o_i(t)
\end{equation} 
Indeed, if a cluster's request is completely satisfied, the error is zero.
A thorough discussion of the measures of allocation quality is made in the \nameref{ch:results} chapter.
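The three per-cluster measures can be sketched in a few lines (a minimal illustration; the helper name is ours, not part of the experimental code):

```python
# Sketch of the per-cluster allocation-quality measures: request r_i,
# occupation o_i(t) and error e_i(t) = r_i - o_i(t).
def cluster_error(request, occupation):
    """Zero when the cluster's request is completely satisfied."""
    return request - occupation

# Example: a cluster with 5 enabled tasks, 3 of them currently occupied.
partial = cluster_error(5, 3)    # 2 tasks still unassigned
satisfied = cluster_error(5, 5)  # request satisfied -> null error
```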

\section{Experimental setup}
\label{sec:exsetup}

As described in Definition \ref{def:problem}, our problem consists of allocating robots to tasks that are distributed in space but grouped in a limited number of clusters.
In our experimental setup, the tasks are represented through \acs{TAM}s and a cluster consists of a circular arrangement of tasks.
Moreover, the cluster is able to broadcast information concerning itself within a limited surrounding area (as shown in Figure \ref{fig:cluster}), namely its id $i$, its request $r_i$ and its current occupation $o_i(t)$.
In the simulation, the information transfer occurs only when the robot is within a circular range (radius: $51$cm) around the cluster.
Although this cluster-to-robot communication is only performed in simulation, this operation could be easily implemented with real \acs{TAM}s and real robots by means of a local communication device such as the \acl{RAB} board by \cite{gutierrez2008open}. 
The shape of the cluster has been specifically designed to give the robots the possibility to navigate around it, either to assess the cluster occupation or to direct to an available task. 

\begin{figure}[H]
\centering
\begin{tikzpicture}[scale=4]
\draw [dashed,fill=blue!20] (0,0) circle (.75cm);
\foreach \x in {45,90,...,360}
    {
    \draw (0:0.42cm) [rotate=\x,fill=black!20,very thick] +(-.12,-.12) rectangle ++(.12,.12);
    %\draw (\x:0.5cm) node{\x};
    }
\end{tikzpicture}
\caption[Schematic representation of the TAM disposition in a cluster]{Schematic representation of the TAM disposition in a cluster. 
Each square box represents a single TAM entity.
The blue circle corresponds to the area ($r$=51cm, in our experiments) within which the robots are able to sense information concerning the cluster.}\label{fig:cluster}
\end{figure}
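The range-limited information transfer described above can be sketched as follows (a hypothetical model of the simulated broadcast; the function and field names are ours):

```python
import math

# Sketch of the simulated cluster-to-robot broadcast: a robot receives
# (id, request, occupation) only when within the 51 cm sensing radius.
SENSING_RADIUS = 0.51  # meters

def sense_cluster(robot_pos, cluster):
    """Return the broadcast (i, r_i, o_i) if the robot is in range, else None.
    cluster is a dict with 'pos', 'id', 'request' and 'occupation' fields."""
    dx = robot_pos[0] - cluster['pos'][0]
    dy = robot_pos[1] - cluster['pos'][1]
    if math.hypot(dx, dy) <= SENSING_RADIUS:
        return cluster['id'], cluster['request'], cluster['occupation']
    return None
```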

In our experimental setup we are going to use 4 clusters, each composed of 8 \acs{TAM}s.
The requests $r_i$ of the clusters will be distributed as follows:
\begin{table}[H]
\myfloatalign
\begin{tabularx}{0.5\textwidth}{ccc} \toprule
\tableheadline{Cluster} & \tableheadline{TAMs}
& \tableheadline{Requests} \\ \midrule \midrule
  1 & 8 & 7 \\ \midrule
  2 & 8 & 5 \\ \midrule
  3 & 8 & 8 \\ \midrule
  4 & 8 & 5 \\ \midrule \midrule
 & \tableheadline{Total} & 25 \\
 \bottomrule
\end{tabularx}
\caption[Cluster requests in the Uniform, Biased and Corridor experimental setups]{Cluster requests in the \nameref{subsec:A}, \nameref{subsec:B} and \nameref{subsec:C} experimental setups.}
\label{tab:requests}
\end{table}

Given the technical equipment of the \acs{TAM} (Table \ref{tab:tamspec}), we decided to make use of the RGB \acs{LED}s to make the internal state of the device detectable by the robots, as shown in Figure \ref{fig:TAMstates}.

\begin{figure}[H]
\begin{tikzpicture}[shorten >=1pt,node distance=4cm,on grid,auto]
   \node[state,thick,draw=green!75,fill=green!20,] (Av)   {Available}; 
	\node[state,initial,thick,draw=orange!75,fill=orange!20,] (Dis) [left=of Av]   {Disabled};   
   \node[state,thick,draw=red!75,fill=red!20] (Oc) [right=of Av] {Occupied}; 
   %\node[state,thick,draw=yellow!75,fill=yellow!20] (Un) [right=of Av] {Unavailable};
    \path[->] 
    (Av) edge node [text width=1.5cm] {Sense Robot} (Oc)
	(Dis) edge node  {Enabled} (Av);
    %(Oc) edge [bend left]  node  {$T_w$ expired} (Un)
    %(Un) edge [bend left] node [left]  {Robot not sensed} (Av);
\end{tikzpicture}
\caption[Finite state machine representing the \acs{TAM} states]{Finite state machine representing the \acs{TAM} states. The color of the states corresponds to the actual color displayed by the \acs{TAM} \acs{LED}s.}
\label{fig:TAMstates}
\end{figure}

At the beginning of the experiment all the \acs{TAM}s are initialized in the \emph{Disabled} state.
Then, in each cluster, a number of \acs{TAM}s corresponding to the request $r_i$ defined in Table \ref{tab:requests} is enabled, making those devices \emph{Available}.
Finally, when a robot is sensed in the \acs{TAM} by means of the light barrier, the device becomes \emph{Occupied}.
Clearly, in each cluster $i$, $8-r_i$ \acs{TAM}s will remain disabled, thus inaccessible to the robots.
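The \acs{TAM} state machine of Figure \ref{fig:TAMstates} can be sketched as a small class (a hypothetical model, not the TAM firmware; the LED colors are taken from the figure):

```python
# Minimal sketch of the TAM state machine: Disabled -> Available -> Occupied,
# with the LED colors displayed in each state.
LED = {'Disabled': 'orange', 'Available': 'green', 'Occupied': 'red'}

class TAM:
    def __init__(self):
        self.state = 'Disabled'       # every TAM is initialized Disabled

    def enable(self):
        """Enable the TAM at setup time, making it Available."""
        if self.state == 'Disabled':
            self.state = 'Available'

    def sense_robot(self):
        """Light-barrier event: an e-puck has entered the TAM."""
        if self.state == 'Available':
            self.state = 'Occupied'

    def led(self):
        return LED[self.state]
```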

We decided to represent the tasks as \emph{sporadic} (i.e. not occurring periodically) and \emph{atomic} (i.e. they cannot be suspended and later resumed) in order to test the capability of the swarm to dynamically adapt to changes in configuration instead of learning periodic patterns.

As we need to have a \emph{number of tasks} \emph{greater} than the \emph{number of robots} (i.e. $n<m$), we will use 20 \emph{e-pucks} randomly deployed in a predefined area of the environment.
The initial position of the robots will be determined by drawing the $x$ and $y$ coordinates of the robot from a uniform distribution within the range depicted.
It should be noted that, given their technical specifications, the chosen number of robots is not sufficient to perform a complete coverage of the environment, thus requiring the \emph{e-pucks} to explore it.
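The random deployment described above can be sketched as follows (a minimal illustration; the function name and the area bounds are ours):

```python
import random

# Sketch of the initial deployment: x and y drawn from a uniform
# distribution within the bounds of the predefined deployment area.
def deploy_robots(n, x_bounds, y_bounds, seed=None):
    rng = random.Random(seed)
    return [(rng.uniform(*x_bounds), rng.uniform(*y_bounds))
            for _ in range(n)]

# 20 e-pucks in a (hypothetical) 2 m x 2 m deployment area.
positions = deploy_robots(20, (-1.0, 1.0), (-1.0, 1.0), seed=42)
```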


\section{Methods}
\label{sec:meth}
Unlike many methods in \acl{SR}, ours are not inspired by any natural phenomenon.
Instead, simplicity was the principle that guided our development.

We started by focusing on the least complex method (i.e. \nameref{subsec:naive}) that achieved a reasonable allocation of the robots.

Once that has been found, we incrementally built a second method (\nameref{subsec:prob}) upon the basic one, trying to devise specific measures to overcome its limitations.

Lastly, we tried to further improve the second method by adding navigation information to it (\nameref{subsec:inf}).

All the methods are based on the \emph{sense}, \emph{think}, \emph{act} paradigm.
At each simulation step, the robot first collects the information gathered through its \emph{sensors}; then, according to its internal controller, it determines the values to transmit to the actuators.
Here, for the sake of clarity, we focus only on the \emph{think} phase, giving a brief description of the robot controllers.

Among the different behavior-based development approaches described in \nameref{subsubsec:devmeth}, we decided to use the probabilistic finite state machine approach.
Finite state machines allow a complex problem to be decomposed into several simpler sub-problems, which can be tackled separately and independently of each other.
Furthermore, they provide an elegant and clear representation of the robot controller, in terms of internal states of the robot and transitions, based on both the state and the currently sensed values of the robot.

In order to operate correctly, all the methods require the use of a minimal set of sensors and actuators: the \emph{wheels} to make the robots move in the environment, the \emph{omni-directional camera} to localize the tasks, the \emph{proximity sensors} to perform obstacle avoidance and the \emph{range and bearing} board to receive the information transmitted by the cluster.
In addition, the \nameref{subsec:inf} method requires the use of the encoder sensors on the wheels, as described in the corresponding section.


\subsection{Overview}
\label{subsec:overview}
\begin{figure}[H]
\begin{tikzpicture}[shorten >=1pt,node distance=5.5cm,on grid,auto] 
   \node[state,initial] (Ex)   {Exploration}; 
   \node[state] (As) [below right=of Ex] {Assessing cluster}; 
   \node[state,accepting] (Uw) [below left=of Ex] {Allocation};
   \node[state] (De) [below left=of As] {Decision};
    \path[->] 
    (Ex) edge [bend left]  node  {Sense task} (As)
    (As) edge [bend left]  node  {Within information sensing range} (De)
    (De) edge node [below,rotate=90]  {Request satisfied $\vee$ Not allocate} (Ex) 
         edge [bend left] node {Allocate} (Uw);
\end{tikzpicture}
\caption[Finite state machine representing the \emph{e-puck} behavioral rules]{Finite state machine representing the \emph{e-puck} behavioral rules}\label{fig:FSMLogicalEpuck}
\end{figure}

Figure \ref{fig:FSMLogicalEpuck} summarizes the high-level behavior implemented in our methods.

All the robots are initially placed in a common deployment area, where they start their \emph{Exploration} of the environment.
As soon as they visually detect a cluster by means of their camera, the robots direct themselves towards it in order to collect information, thus performing a \emph{cluster assessment}.
When the information has been gathered, the robots enter the \emph{decision} phase.
In the case of a positive outcome of the \emph{decision} phase, the robot decides to allocate itself to the task, thus completing its duty.
Otherwise, the robot goes back to the \emph{exploration} phase to find another available activity to perform.

As we discussed in section \nameref{sec:sa}, our problem can be decoupled into two sub-problems: \emph{task localization} and \emph{task allocation}.

The \emph{task localization} problem is addressed in the \emph{Exploration} state, whereas the \emph{task allocation} is performed in the \emph{Assessing cluster}, \emph{Decision} and \emph{Allocation} states.

Instead of implementing collective navigation techniques, such as area coverage or chain formation, we chose to perform the \emph{Exploration} phase with a random walk.

Our choice was made taking into account the advantages offered by such a method: its simplicity, its minimal requirements in terms of sensors and computational resources and the absence of bias towards some preferred directions.
The \nameref{subsec:naive} and \nameref{subsec:prob} methods implement an uninformed version of the random walk, while the \nameref{subsec:inf} one makes use of odometric information to guide the exploration.

The \emph{decision} mechanism for the allocation is the key feature that distinguishes the different methods.

The \nameref{subsec:naive} method applies a greedy allocation rule: as soon as an available task has been detected, the robot tries to allocate to it.

On the other hand, the \nameref{subsec:prob} and \nameref{subsec:inf} methods introduce an actual probabilistic decision phase, prior to the allocation.

Every time a change in the current cluster occupation is sensed during the assessment phase, a stochastic decision mechanism is triggered.
The robot decides whether or not to leave the cluster with a probability equal to the cluster's current relative occupation (i.e. $\frac{o_i(t)}{r_i}$).
Through this simple decision rule, we would like to prevent a concentration of the robots on a single cluster and stimulate a more uniform allocation.

In addition to this probabilistic rule, in every method, the decision to leave the cluster is taken anytime a robot detects that the cluster current occupation equals the requests (i.e. $o_{i}(t) = r_i$ for the cluster $i$ being assessed) or, in other words, when the cluster request is satisfied.

Whenever the decision to leave is taken, the robot enters a temporary blind state (not depicted on the state machine).
The robot remains in this state for 100 time steps, during which it ignores the readings coming from the camera.
This simple mechanism has been developed to prevent a robot from being attracted by a cluster that it has just decided to leave, without resorting to more sophisticated solutions (e.g. odometry).
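The temporary blind state can be sketched as a small timer (a minimal illustration; the class name and interface are ours, and the only parameter taken from the text is the 100-step duration):

```python
# Sketch of the temporary blind state: after deciding to leave a cluster,
# the robot ignores its camera readings for a fixed number of time steps.
class BlindTimer:
    def __init__(self, duration=100):
        self.duration = duration
        self.remaining = 0

    def start(self):
        """Triggered by the decision to leave a cluster."""
        self.remaining = self.duration

    def blind(self):
        """Call once per simulation step; True while the camera is ignored."""
        if self.remaining > 0:
            self.remaining -= 1
            return True
        return False
```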


%------------------------------------------------

\subsection{Naive}
\label{subsec:naive}
\begin{figure}[H]
\hbox{\hspace{-1.5cm}
\begin{tikzpicture}[scale=0.4,shorten >=1pt,node distance=5cm,on grid,auto] 
   \node[state,thick,initial] (LW)   {Linear Walk}; 
   \node[state,thick] (RT) [right=of LW] {Random Turn}; 
   \node[state,thick] (DtoTAM) [above=of LW] {Directing to TAM};
   \node[state,thick] (AsClu) [right=of DtoTAM] {Assess cluster};
   \node[state,thick,draw=red!75] (In) [left=of DtoTAM] {In TAM};
   \node[state,accepting,thick,draw=red!75] (Wo) [below=of In] {Perform Task};
   %\node[state,thick,draw=red!75] (Ex) [below left=of Wo] {Exit TAM};
    \path[->] 
    (LW) edge [bend left]  node  {Obstacle} (RT)
         edge [bend left]  node [left,text width=2cm]  {Available $\vee$ Occupied TAM} (DtoTAM)
    (RT) edge  node  {Turn end} (LW)
    (DtoTAM) edge  node [above right,text width=1.5cm,rotate=-45]  {Lost TAM} (RT) 
        edge node [above, text width=2cm] {$d_b < d_{in}$ $\wedge$ Available TAM} (In)
        edge node [below, text width=2cm]{$d_b < d_{as}$ $\wedge$ Occupied TAM } (AsClu)
    (AsClu) edge [bend right] node [above] {Available TAM} (DtoTAM)
            edge [bend left] node  {$o_i = r_i$} (RT)
    (In) edge node [left,text width=2cm]   {Occupied TAM} (Wo)
         %edge [bend right] node [above, rotate=45]   {$o_i == r_i$ $\wedge$ Y} (Ex)
    %(Wo) edge [bend right] node [right]  {End Work} (In)
    %(Ex) edge [bend right] node [below,rotate=15]  {Out of TAM} (RT)
    ;
\end{tikzpicture}}
\caption[Deterministic finite state machine corresponding to the individual robot controller implemented for the \emph{Naive} method]{Deterministic finite state machine corresponding to the individual robot controller implemented for the \emph{Naive} method. The red circles correspond to the states where the robot has lit up the red \acs{LED}s on its body.}\label{fig:FSMNaive}
\end{figure}

Figure \ref{fig:FSMNaive} provides a detailed description of the robot controller for the \emph{Naive} method, based on the high-level description depicted in Figure \ref{fig:FSMLogicalEpuck}.

Here, the uninformed \emph{random walk} is performed by combining two simple behaviors: \emph{Linear walk} and \emph{Random turn}.
\emph{Linear walk} consists of letting the robot move in a straight line, i.e. applying the same speed to both wheels.
As soon as the proximity sensors detect the presence of an obstacle (within a range of $15$ cm), the robot enters the \emph{Random turn} state.
A \emph{random turn} is performed by first choosing a rotation direction (clockwise or counterclockwise) and then pivoting for a random number of time steps.
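The two behaviors can be sketched as a single control-step function (a hypothetical illustration; the state encoding, speeds and turn-length range are ours, not the thesis parameters):

```python
import random

# Sketch of the uninformed random walk: Linear walk + Random turn.
def random_walk_step(state, obstacle, rng=random.Random(0),
                     turn_range=(5, 40), speed=1.0):
    """One control step. state is ('linear',) or ('turn', direction, steps_left);
    returns (new_state, (left_wheel_speed, right_wheel_speed))."""
    if state[0] == 'linear':
        if obstacle:                           # proximity sensors fired
            direction = rng.choice((-1, 1))    # clockwise or counterclockwise
            steps = rng.randint(*turn_range)   # pivot for a random duration
            return ('turn', direction, steps), (direction * speed, -direction * speed)
        return ('linear',), (speed, speed)     # keep moving in a straight line
    _, direction, steps_left = state
    if steps_left <= 1:                        # turn end: back to linear walk
        return ('linear',), (speed, speed)
    return ('turn', direction, steps_left - 1), (direction * speed, -direction * speed)
```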

The robot is initialized in the \emph{Linear walk} state, but as soon as a task (either available or occupied) is detected, the robot \emph{directs} itself towards it.
As explained in Section \ref{sec:exsetup}, the robot is able to determine the state of a task by perceiving the corresponding color with its omni-directional camera within a range of $50$ cm.
In our simulation, the readings from the omni-directional camera are colored points, each characterized by its color, its distance and its angle with respect to the robot's heading direction.
Thanks to this information, the robot can easily rotate and head in the same direction as the sensed \acs{TAM}.
As soon as an enabled \acs{TAM}, regardless of its internal state, is detected by means of the camera, the robot enters the \emph{Directing to TAM} state and starts moving in the direction of the perceived color point.
The sensed \acs{TAM} could be either \emph{available} or already \emph{occupied}.
Since the \emph{Naive} allocation rule is \emph{greedy}, if the abstracted task is available, the robot directs towards it.

When the distance of the color point $d_b$ it perceives is smaller than the \acs{TAM} depth ($d_{in}$=10.83 cm) the robot lights up the red \acs{LED}s on its body to prevent other robots from allocating to the same task and enters the \emph{In TAM} state, stopping inside the \acs{TAM}.
Once the \acs{TAM} has detected the presence of the \emph{e-puck} by means of its light barrier, it signals the change in its internal state by changing the color of its RGB LEDs to red.
This operation indeed represents the abstraction of the activity that the \emph{e-puck} should perform.
In response to this state transition, the robot moves from the \emph{In TAM} state to the final state \emph{Perform Task}.
From the \emph{e-puck} point of view, in our experimental setup, the abstraction of a task consists of remaining idle inside the \acs{TAM}.

On the other hand, if the sensed \acs{TAM} is unavailable, while being in the \emph{Directing to TAM} state, the \emph{e-puck} moves in the direction of the cluster to be able to \emph{assess} its occupation.
When the robot arrives closer to the \acs{TAM} than the assessing distance threshold (i.e. when the distance of the perceived color point $d_b$ is smaller than $d_{as}=25$cm), the robot's internal state changes to \emph{Assessing Cluster}.
The assessment phase has two possible outcomes: either there are still tasks available in the cluster or the cluster request has been satisfied.

If available \acs{TAM}s are present, the robot starts navigating around the cluster, performing a circular motion within the assessing range, until it perceives a green color point corresponding to an \emph{available} task.
Then it moves to the \emph{Directing to TAM} state, eventually entering the \acs{TAM} as described above.

Otherwise, if the cluster request has been satisfied (i.e. $o_i(t)=r_i$), the \emph{e-puck} performs a random turn and moves back to the \emph{Linear Walk} state, restarting the exploration of the environment.
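The greedy outcome of the \emph{Naive} assessment phase can be summarized in one function (a hypothetical sketch; the action names are ours):

```python
# Sketch of the Naive assessment outcome: greedy rule over the broadcast
# (r_i, o_i) and the currently perceived color points.
def naive_assess(occupation, request, available_task_seen):
    """Greedy rule: head to the first available task; if the request is
    satisfied (o_i = r_i), give up and resume the random walk."""
    if occupation >= request:
        return 'random_turn'     # cluster satisfied: restart exploration
    if available_task_seen:
        return 'direct_to_tam'   # green point perceived: go allocate
    return 'assess'              # keep circling within the assessing range
```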
   

%------------------------------------------------

\subsection{Probabilistic}
\label{subsec:prob}
\begin{figure}[H]
\hbox{\hspace{-1.5cm}
\begin{tikzpicture}[scale=0.5,shorten >=1pt,node distance=5cm,on grid,auto] 
   \node[state,thick,initial] (LW)   {Linear Walk}; 
   \node[state,thick] (RT) [right=of LW] {Random Turn}; 
   \node[state,thick] (DtoTAM) [above=of LW] {Directing to TAM};
   \node[state,thick] (AsClu) [right=of DtoTAM] {Assess cluster};
   \node[state,thick,draw=red!75] (In) [left=of DtoTAM] {In TAM};
   \node[state,accepting,thick,draw=red!75] (Wo) [below=of In] {Perform Task};
   %\node[state,thick,draw=red!75] (Ex) [below left=of Wo] {Exit TAM};
    \path[->] 
    (LW) edge [bend left]  node  {Obstacle} (RT)
         edge [bend left]  node [left,text width=2cm]  {Available $\vee$ Occupied TAM} (DtoTAM)
    (RT) edge  node  {Turn end} (LW)
    (DtoTAM) edge  node [above right,text width=2cm,rotate=-45]  {Stalemate $\vee$ Lost TAM} (RT) 
        edge node [above, text width=2cm] {$d_b < d_{in}$ $\wedge$ Available TAM} (In)
        edge node [below, text width=2cm]{$d_b < d_{as}$ $\wedge$ Occupied TAM } (AsClu)
    (AsClu) edge [bend right] node [above] {Allocate} (DtoTAM)
        edge [bend left] node [text width=2cm]  {Not allocate $\vee$ Stalemate $\vee$ $o_i = r_i$ $\vee$ Decision} (RT)
    (In) edge node [left,text width=2cm]   {Occupied TAM} (Wo)
         %edge [bend right] node [above, rotate=45]   {$o_i == r_i$ $\wedge$ Y} (Ex)
    %(Wo) edge [bend right] node [right]  {End Work} (In)
    %(Ex) edge [bend right] node [below,rotate=15]  {Out of TAM} (RT)
    ;
\end{tikzpicture}}
\caption[Probabilistic finite state machine corresponding to the individual robot controller implemented for the \emph{Probabilistic} method]{Probabilistic finite state machine corresponding to the individual robot controller implemented for the \emph{Probabilistic} method. The red circles correspond to the states where the robot has turned on the red \acs{LED}s on its body. The \emph{Allocate} and \emph{Not Allocate} transitions represent the two possible outcomes of the probabilistic decision.}\label{fig:FSMProbabilistic}
\end{figure}

Since the \emph{Probabilistic} method (Figure \ref{fig:FSMProbabilistic}) is incrementally built upon the \emph{Naive} one, the general dynamics of the controller are similar to what has been explained in the \nameref{subsec:naive} section.
The behavior of the robot is identical to that of the \emph{Naive} method in the \emph{In TAM}, \emph{Perform Task}, \emph{Linear walk} and \emph{Random turn} states.

The \emph{exploration} technique is indeed an uninformed random walk.

The \emph{allocation} rule, on the other hand, is modified in order to address
the two main issues of the \emph{naive} approach: the occurrence of a \emph{stalemate} and the \emph{lack} of an allocation rule to obtain a more \emph{uniform} task allocation.

A \emph{stalemate} may occur whenever two robots decide to allocate themselves to the same task.
In that case, both the robots will start moving towards the task.
If the robots arrive close enough (around $30$cm) to the \acs{TAM} at the same time, they start trying to avoid each other.
Since there is no direct communication among the robots, there is no possibility of an explicit agreement on which robot should perform the task.

Our solution to this issue relies on a counter which is started once the robot enters the \emph{Directing to TAM} or \emph{Assessing Cluster} states and incremented at each simulation time step.
When the counter surpasses a predefined threshold (100 time steps, in our implementation), the robot stochastically decides whether to wait or to leave the cluster.
The probability of leaving ($p_l$), by performing a \emph{random turn} and starting a blind exploration, is equal to $0.01$.
The decision process is then repeated at each time step, until either one of the robots decides to leave before the other, thus allowing the remaining one to allocate itself to the task, or both leave the cluster.
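The stalemate-resolution rule can be sketched as follows (a minimal illustration; the function name is ours, while the 100-step threshold and $p_l = 0.01$ are the values from the text):

```python
import random

# Sketch of the stalemate-resolution rule: once the per-state counter
# exceeds the threshold, leave with probability p_leave at every step.
def stalemate_decision(counter, threshold=100, p_leave=0.01, rng=None):
    rng = rng or random.Random()
    if counter <= threshold:
        return False               # keep waiting, no decision triggered yet
    return rng.random() < p_leave  # stochastic decision, repeated each step
```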

On the other hand, the \emph{greedy} allocation rule of the \emph{Naive} method is substituted by a probabilistic one.

In the \emph{Probabilistic} method, a robot in the \emph{Linear walk} state still heads towards a \acs{TAM} as soon as it has perceived it, regardless of its state, but behaves differently during the \emph{assessment} phase.

In fact, if the perceived task is directly available, the robot will immediately try to allocate itself to it.

Otherwise, if the robot enters the \emph{Assessing Cluster} state, the probabilistic decision is triggered.
Here, the robot will either decide to \emph{allocate} or to \emph{not allocate}.

In the first case, in the same way as in the \emph{Naive} method, the robot will turn around the cluster until the first available task is found.

In the second one, the robot will move to the \emph{Random turn} state, perform a random turn and start a temporary blind exploration.

Moreover, the \emph{decision} phase occurs whenever a change in the occupation of the currently assessed cluster is sensed.

The idea of introducing a stochastic component in the decision rule arises from the objective of achieving a uniform allocation of the robots across the cluster.
One way of doing so, is allowing the robots to move from already crowded clusters to those that are still almost empty, thus balancing the occupation among them.
With this idea in mind, we tried to devise a new allocation rule.

A deterministic rule was immediately discarded since it lacked flexibility with respect to differences in the size of the clusters or the number of robots.
Had the rule been based on an absolute occupation threshold (e.g. leave the cluster if $o_i(t)>X_i$), it would have required global knowledge of the environment in order to determine a priori the optimal values to achieve a uniform allocation.
With a rule based on relative occupation (e.g. leave the cluster if $\frac{o_i(t)}{r_i} > X_i$), in addition to the global knowledge requirement, once the threshold had been met in all the clusters, it would not have been possible to allocate the remaining robots.

Thus, we decided to implement a probabilistic rule based on a simple intuition: limiting the allocation of robots to clusters whose occupation is already high.
In order to do so, we propose an abandon probability, computed every time that a robot enters the \emph{decision} phase.
The abandon probability $a_i$ for a certain robot assessing cluster $i$, at time $t$ is defined as:
\begin{equation}
a_i(t) = \frac{o_i(t)}{r_i}
\label{eq:prob}
\end{equation}
The occupation $o_i$ is normalized by the cluster request in order to obtain a value in the $[0,1]$ range.

It should be noted that the higher the number of robots currently allocated to tasks belonging to the cluster, the higher the likelihood of leaving it.

The advantage of this allocation rule is that it is \emph{completely distributed}, with \emph{minimal} requirements in terms of communication (i.e. only the information on the occupation $o_i(t)$ must be transferred to the robot) and computational capabilities, and \emph{flexible} with respect to the different requests of the clusters.
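The abandon rule of Equation \ref{eq:prob} is simple enough to sketch directly (helper names are ours; the rule itself is the one defined above):

```python
import random

# Sketch of the probabilistic decision: leave the cluster with
# probability a_i(t) = o_i(t) / r_i.
def abandon_probability(occupation, request):
    """In [0, 1], since 0 <= o_i(t) <= r_i."""
    return occupation / request

def decide_to_leave(occupation, request, rng=random.Random(0)):
    return rng.random() < abandon_probability(occupation, request)
```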

%----------------------------------------------------------------------------------------

\subsection{Informed}
\label{subsec:inf}
\begin{figure}[H]
\hbox{\hspace{-1.5cm}
\begin{tikzpicture}[scale=0.8,shorten >=1pt,node distance=5cm,on grid,auto] 
   \node[state,thick,initial] (LW)   {Linear Walk}; 
   \node[state,thick] (RT) [right=of LW] {Random Turn}; 
   \node[state,thick] (DtoTAM) [above=of LW] {Directing to TAM};
   \node[state,thick] (AsClu) [right=of DtoTAM] {Assess cluster};
   \node[state,thick,draw=red!75] (In) [left=of DtoTAM] {In TAM};
   \node[state,accepting,thick,draw=red!75] (Wo) [below=of In] {Perform Task};
   %\node[state,thick,draw=red!75] (Ex) [below left=of Wo] {Exit TAM};
    \path[->] 
    (LW) edge [bend left]  node [above left,text width=2cm]  {Obstacle $\vee$ Odometry} (RT)
         edge [bend left]  node [left,text width=2cm]  {Available $\vee$ Occupied TAM} (DtoTAM)
    (RT) edge  node  {Turn end} (LW)
    (DtoTAM) edge  node [above right,text width=2cm,rotate=-45]  {Stalemate $\vee$ Lost TAM} (RT) 
        edge node [above, text width=2cm] {$d_b < d_{in}$ $\wedge$ Available TAM} (In)
        edge node [below, text width=2cm]{$d_b < d_{as}$ $\wedge$ Occupied TAM } (AsClu)
    (AsClu) edge [bend right] node [above] {Allocate} (DtoTAM)
        edge [bend left] node [text width=2cm]  {Not allocate $\vee$ Stalemate $\vee$ $o_i = r_i$ $\vee$ Decision} (RT)
    (In) edge node [left,text width=2cm]   {Occupied TAM} (Wo)
         %edge [bend right] node [above, rotate=45]   {$o_i == r_i$ $\wedge$ Y} (Ex)
    %(Wo) edge [bend right] node [right]  {End Work} (In)
    %(Ex) edge [bend right] node [below,rotate=15]  {Out of TAM} (RT)
    ;
\end{tikzpicture}}
\caption[Probabilistic finite state machine corresponding to the individual robot controller implemented for the \emph{Informed} method]{Probabilistic finite state machine corresponding to the individual robot controller implemented for the \emph{Informed} method. The red circles correspond to the states where the robot has lit up the red \acs{LED}s on its body.}\label{fig:FSMInformed}
\end{figure}

The \emph{Informed} method is built upon the \emph{Probabilistic} one, hence the behavior of the robots in all the states is the same as described in section \nameref{subsec:prob}, including the mechanism to solve the stalemate issue and the probabilistic allocation rule.
With the \emph{uniform allocation} problem being tackled with the probabilistic rule introduced in Equation \ref{eq:prob}, the only difference with the previous method is how the \emph{exploration} is performed.
While visualizing the first experiments with the \emph{Probabilistic} method, we discovered a minor issue in the exploration phase. 
In fact, every time a robot decides to leave a cluster, after having completed the blind exploration phase, there is no guarantee that it will actually head towards another cluster rather than returning to the one it has just left.
Indeed, a robot may repeatedly reassess the cluster it has just left due to the lack of available tasks, resulting in redundant and inefficient behavior.
This can happen as a consequence of the obstacle avoidance maneuvers triggered by other robots or by the arena walls.
Given the sensory equipment of the \emph{e-puck}, we decided to propose a solution to this issue based on odometry.
Odometry is a technique to estimate the change in position of a robot with respect to a known position, using data from its motion sensors.
In our solution, as soon as a robot decides to leave a cluster, it starts keeping track of the position of the cluster $\mathbf{p}$.
This relative localization with respect to the robot is then updated at each time step with the information coming from the sensors.

\graffito{A detailed explanation of the model can be found in \protect\cite{lucas2001tutorial}}
\emph{Odometry} is required since the displacement and the rotation of the robot cause its reference frame to move and rotate accordingly.
Thus, if the position of a fixed point is not translated into the new reference frame, that point can no longer be tracked accurately.
Through the readings of the encoder sensors mounted on the wheels, it is possible to determine the distances $d_l$ and $d_r$ traveled by the left and right wheels of the robot during each simulation step.
Since the development of our methods has been performed in simulation, the readings coming from the sensors are error-free.
However, on real robots the readings from the sensors are perturbed by noise.
Here, we propose to model this noise as additive Gaussian noise with mean $0$ and standard deviation $0.2$.
From these values, knowing the inter-wheel distance of the robot $d_{iw}$, it is possible to estimate the displacement $d_d$ and rotation $\vartheta_d$ of the robot:
 
\begin{equation}
\begin{aligned}\label{eq:1}
d_d = \frac{d_l + d_r}{2}\\
\vartheta_d = \frac{d_l - d_r}{d_{iw}}
\end{aligned}
\end{equation}
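Equation \ref{eq:1} translates directly into code. The following Python sketch is ours and not part of the robot controller; the function name \verb|odometry_step| and the default inter-wheel distance are illustrative assumptions, while the noise model follows the one proposed above.

```python
import random

D_IW = 0.053       # inter-wheel distance in meters (illustrative e-puck value, an assumption)
NOISE_STD = 0.2    # standard deviation of the additive Gaussian noise (cf. text)


def odometry_step(d_l, d_r, d_iw=D_IW, noisy=False):
    """Estimate the displacement d_d and rotation theta_d from wheel travel.

    d_l, d_r: distances traveled by the left and right wheels this step.
    When noisy=True, zero-mean Gaussian noise perturbs each reading,
    mimicking real encoder sensors.
    """
    if noisy:
        d_l += random.gauss(0.0, NOISE_STD)
        d_r += random.gauss(0.0, NOISE_STD)
    d_d = (d_l + d_r) / 2.0       # displacement of the robot center
    theta_d = (d_l - d_r) / d_iw  # rotation, per Equation (eq:1)
    return d_d, theta_d
```

In simulation one would call it with \verb|noisy=False|, matching the error-free readings mentioned above.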

The displacement magnitude and angle can be combined to form a displacement vector:
\graffito{The vectors are expressed in polar coordinates, in the form (\emph{magnitude}, \emph{angle}) or, equivalently, $(r,\vartheta)$.}
\begin{equation}\label{eq:2}
\mathbf{d} = (d_d,\vartheta_d)
\end{equation}

The vector is then used to perform the roto-translation required to correctly update the stored position of the previous cluster $\mathbf{p}$:

\begin{equation}
\begin{aligned}\label{eq:3}
\mathbf{p} & = \mathbf{x} + \mathbf{d}\\
\mathbf{p} & = \begin{bmatrix}
       \cos(\vartheta_d) & -\sin(\vartheta_d) \\
       \sin(\vartheta_d) & \cos(\vartheta_d) \\ 
     \end{bmatrix} \cdot \mathbf{p} = (d_p,\vartheta_p)
\end{aligned}
\end{equation}

By performing the sequence of operations described in Equations \ref{eq:1}, \ref{eq:2} and \ref{eq:3} at each time step, it is possible to maintain a reasonable estimate of the position of the most recently left cluster.
We speak of ``a reasonable estimate'' since the presence of error in the readings does not allow the position $\mathbf{p}$ of the cluster to be tracked precisely.
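As an illustration of this kind of dead-reckoning update, the Python sketch below keeps the cluster position in the robot's reference frame in Cartesian coordinates. The sign conventions depend on how the frames are defined, so this is a sketch under our own assumptions rather than a transcription of the equations above.

```python
import math


def update_relative_position(p, d_d, theta_d):
    """Update a cluster position p = (x, y), stored in the robot's frame,
    after the robot has moved forward by d_d and rotated by theta_d.

    Assumes the robot's x-axis points along its heading; the sign
    conventions here are illustrative and may differ from the thesis's
    frame definitions.
    """
    x, y = p
    # Translation: the robot advanced by d_d along its own x-axis,
    # so the fixed point recedes by d_d in that frame.
    x -= d_d
    # Rotation: the frame rotated by theta_d, so the point's
    # coordinates rotate by -theta_d.
    c, s = math.cos(-theta_d), math.sin(-theta_d)
    return (c * x - s * y, s * x + c * y)
```

Applying this function once per simulation step, with the $d_d$ and $\vartheta_d$ estimated from odometry, keeps the stored cluster position consistent with the robot's motion.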

This information is used in the \emph{Linear walk} state to perform an informed random walk.
In fact, whenever the estimated cluster orientation $\vartheta_p$ falls within the range $(h-\frac{\pi}{6},h+\frac{\pi}{6})$, where $h$ is the angle of the robot's current heading, and the estimated cluster distance $d_p$ is smaller than two times the omnidirectional camera range (i.e., $1\,m$), the robot performs a change of direction.
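This trigger condition can be stated compactly. In the sketch below, which is ours, the angular difference is wrapped into $(-\pi,\pi]$ before the comparison, and the $1\,m$ distance threshold is taken from the text.

```python
import math

DIST_THRESHOLD = 1.0  # two times the camera range, as stated in the text (m)


def should_change_direction(theta_p, d_p, h):
    """Return True when the previously left cluster lies roughly ahead.

    theta_p: estimated cluster orientation (rad),
    d_p:     estimated cluster distance (m),
    h:       current heading angle of the robot (rad).
    """
    # Wrap the angular difference into [-pi, pi) before comparing.
    diff = (theta_p - h + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) < math.pi / 6 and d_p < DIST_THRESHOLD
```

When the function returns \verb|True|, the controller would switch to the \emph{Random Turn} state, as described next.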

The change of direction (marked with \emph{Odometry} in Figure \ref{fig:FSMInformed}) is implemented by changing the state of the robot to \emph{Random Turn}.

We believe that, even though our solution cannot fully profit from the advantages that an exact odometry would bring, the increased occurrence of direction changes due to this mechanism results in a better exploration of the environment and, possibly, in a more even distribution of the robots across clusters.

\subsection{Summary}

\begin{table}
\myfloatalign
\begin{tabularx}{\textwidth}{p{2.5cm}p{3cm}p{4.5cm}} \toprule
\tableheadline{Method} & \tableheadline{Exploration} & \tableheadline{Decision rule (at time $t^*$)} \\ \midrule \midrule
\nameref{subsec:naive} & Uninformed random walk & Greedy.

Leave if at time $t^*$, $r_i(t^*)=o_i(t^*)$. \\ \midrule
\nameref{subsec:prob} & Uninformed random walk & Probabilistic with abandon probability $a_i(t^*)=\frac{o_i(t^*)}{r_i(t^*)}$. 

Probabilistic stalemate prevention rule.\footnotemark[1] 

Leave if at time $t^*$, $r_i(t^*)=o_i(t^*)$.\\ \midrule
\nameref{subsec:inf} & Informed random walk using odometry & Probabilistic with abandon probability $a_i(t^*)=\frac{o_i(t^*)}{r_i(t^*)}$. 

Probabilistic stalemate prevention rule.\footnotemark[2]

Leave if at time $t^*$, $r_i(t^*)=o_i(t^*)$.\\
\bottomrule
\end{tabularx}
\caption[Overview of the developed methods]{Overview of the developed methods.}  
\label{tab:overviewmeth}
\end{table}

\footnotetext[1]{\emph{Probabilistic stalemate prevention rule:} After the $100^{th}$ time step spent in directing state, decide every time step whether to leave with probability $p=0.01$.}
\footnotetext[2]{See 1.}
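The decision rules summarized in Table \ref{tab:overviewmeth} and in the footnote can be sketched as follows; the sketch is ours, under the assumption that the counts $o_i$ and $r_i$ are available to the robot at each time step.

```python
import random


def abandon_probability(o_i, r_i):
    """Probabilistic abandon rule a_i(t*) = o_i(t*) / r_i(t*)."""
    return o_i / r_i


def stalemate_leave(steps_in_directing, p=0.01, threshold=100):
    """Stalemate prevention rule: after the 100th time step spent in the
    directing state, decide at every step whether to leave with
    probability p."""
    if steps_in_directing <= threshold:
        return False
    return random.random() < p
```

A robot applying the probabilistic rule would leave the cluster with probability \verb|abandon_probability(o_i, r_i)|, and deterministically when $r_i(t^*)=o_i(t^*)$, i.e.\ when that probability reaches $1$.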

Our starting point was the problem of uniformly allocating robots to spatially distributed tasks (cf. Definition \ref{def:problem}).

We clarified this statement by fixing constraints on the number of agents ($20$), the number of tasks ($25$) and the arrangement of the tasks (\emph{4 clusters}).

From this statement, we devised a physical implementation of the tasks, the \acs{TAM} and their arrangement to form clusters (Figure \ref{fig:cluster}).

Once having defined a concrete setup for the problem, we began the development of our methods by means of an iterative process based on simulations.

We developed three robot controllers, presented here as finite state machines, that tackle separately the two sub-problems that characterize our definition of the problem: \emph{exploration} and \emph{allocation}.

A summary of the relevant features of the solutions we implemented in our methods to tackle these problems is presented in Table \ref{tab:overviewmeth}.

In chapter \ref{ch:results} we present the results we obtained by implementing the same controller on all the robots in the swarm and launching simulations of $1000\,s$ ($10\,000$ time steps) each.




