
%% bare_conf.tex
\documentclass[10pt, conference, compsocconf]{IEEEtran}
% Add the compsocconf option for Computer Society conferences.
%
\IEEEoverridecommandlockouts
% If IEEEtran.cls has not been installed into the LaTeX system files,
% manually specify the path to it like:
% \documentclass[conference]{../sty/IEEEtran}

% Some very useful LaTeX packages include:
% (uncomment the ones you want to load)
\usepackage{epsfig}
\usepackage{subfigure}
\usepackage{calc}
\usepackage{amssymb}
\usepackage{amstext}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{multicol}
\usepackage{pslatex}
\usepackage{xcolor}

%%%%%%%%%%%%%%%%%%%%%% Miguel's variables %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\newcommand{\Added}[1]{\textcolor{red}{#1}}%red
\newcommand{\AddedIP}[1]{\textcolor{red}{#1}}%red
\newcommand{\feRT}{\sin^2(e_{RT})} %f(e_{RT})
\newcommand{\dfeRT}{\sin(e_{RT})\cos(e_{RT})}
\newcommand{\dfeRTs}{\sin^2(e_{RT})}
\newcommand{\feth}{[1-\cos(e_\theta)]} %g(e_\theta)
\newcommand{\dfeth}{\sin(e_\theta)} %\frac{g'(e_\theta)}{2}
\newcommand{\dfeths}{\sin^2(e_\theta)} %\frac{g'(e_\theta)}{2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}


\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{Adaptive Autonomous Navigation using Reactive Multi-agent System for Control Laws Merging}


% author names and affiliations
% use a multiple column layout for up to two different
% affiliations

\author{\IEEEauthorblockN{Baudouin Dafflon, Franck Gechter, Abderrafiaa Koukam}
\IEEEauthorblockA{IRTES-SET\\
UTBM\\
Belfort, France\\
\{FirstName.Lastname\}@utbm.fr}
\and
\IEEEauthorblockN{Jose Vilca, Lounis Adouane}
\IEEEauthorblockA{Institut Pascal\\
Blaise Pascal University\\
Clermont-Ferrand, France\\
\{FirstName.Lastname\}@univ-bpclermont.fr}
%\thanks{This work was supported by the French National Research Agency (ANR) through the Safeplatoon project.}
}

% conference papers do not typically use \thanks and this command
% is locked out in conference mode. If really needed, such as for
% the acknowledgment of grants, issue a \IEEEoverridecommandlockouts
% after \documentclass

% for over three affiliations, or if they all won't fit within the width
% of the page, use this alternative format:
% 
%\author{\IEEEauthorblockN{Michael Shell\IEEEauthorrefmark{1},
%Homer Simpson\IEEEauthorrefmark{2},
%James Kirk\IEEEauthorrefmark{3}, 
%Montgomery Scott\IEEEauthorrefmark{3} and
%Eldon Tyrell\IEEEauthorrefmark{4}}
%\IEEEauthorblockA{\IEEEauthorrefmark{1}School of Electrical and Computer Engineering\\
%Georgia Institute of Technology,
%Atlanta, Georgia 30332--0250\\ Email: see http://www.michaelshell.org/contact.html}
%\IEEEauthorblockA{\IEEEauthorrefmark{2}Twentieth Century Fox, Springfield, USA\\
%Email: homer@thesimpsons.com}
%\IEEEauthorblockA{\IEEEauthorrefmark{3}Starfleet Academy, San Francisco, California 96678-2391\\
%Telephone: (800) 555--1212, Fax: (888) 555--1212}
%\IEEEauthorblockA{\IEEEauthorrefmark{4}Tyrell Inc., 123 Replicant Street, Los Angeles, California 90210--4321}}


% make the title area
\maketitle

\begin{abstract}
This paper deals with the intelligent autonomous navigation of a vehicle in a cluttered environment. We present a control architecture for the safe and smooth navigation of an Unmanned Ground Vehicle (UGV). This control architecture is designed to allow the use of a single control law in different vehicle contexts (attraction to the target, obstacle avoidance, etc.). The reactive obstacle avoidance strategy is based on the limit-cycle approach. To manage the interaction between the controllers according to the context, a multi-agent system is proposed.
Multi-agent systems are an efficient approach to problem solving and decision making. They can be applied to a wide range of applications thanks to their intrinsic properties, such as self-organization and emergent phenomena. The merging of the control laws exploits these properties to adapt the control to the environment.
Different simulations in cluttered environments show the performance and efficiency of our proposal in obtaining a fully reactive and safe control strategy for the navigation of a UGV.
\end{abstract}

\begin{IEEEkeywords}
Multi-agent systems; Autonomous vehicles; Hybrid architecture; Obstacle avoidance.
\end{IEEEkeywords}


% For peer review papers, you can put extra information on the cover
% page as needed:
% \ifCLASSOPTIONpeerreview
% \begin{center} \bfseries EDICS Category: 3-BBND \end{center}
% \fi
%
% For peerreview papers, this IEEEtran command inserts a page break and
% creates the second title. It will be ignored for other modes.
\IEEEpeerreviewmaketitle


%==============================================================================
\section{Introduction}\label{sec:introduction}
One of the main motivations for using Unmanned Ground Vehicles (UGVs) is to decrease traffic congestion in urban areas, with its correlated pollution, noise and waste of time.
To obtain such a transportation system, automatic navigation of these vehicles is required, subject to criteria which guarantee the safety and comfort of passengers \cite{Daviet97}.

This paper deals with the navigation of an urban vehicle in a cluttered environment. The navigation task consists in reaching a defined target while avoiding obstacles detected from real-time sensor measurements. To ensure the vehicle's ability to accomplish reactive navigation, we propose to explore behavioral control architectures, originally introduced by Brooks \cite{Brooks86}. This kind of control architecture breaks the complexity of the overall task by dividing it into several basic tasks \cite{Adouane09_a}. Each basic task is accomplished by its corresponding controller.

An important issue for successful autonomous navigation is obstacle avoidance. This function prevents robot collisions, thus ensuring vehicle safety.
Many reactive approaches can be found in the literature, such as obstacle avoidance using vortex fields \cite{DeLuca94} and orbital trajectories \cite{Kim03}. The latter approach is built on circular limit-cycle differential equations \cite{Kim03,Jie06,Adouane09_b}. Circular limit cycles are more stable than vortex fields and always converge to periodic orbits. This work uses the elliptical trajectories presented in \cite{Adouane2011}; a more generic and efficient obstacle avoidance is thus performed, even for different obstacle shapes, for instance long walls.

Most of the algorithms found in the literature deal with only one fixed obstacle. They nevertheless bring interesting properties such as reliability and continuity of the control law. Dealing with several obstacles, possibly moving ones, is a much harder issue: it requires either substantial embedded computational power, so as to recompute at each time step the correct trajectory under the continuity constraint, or a decrease in performance and precision.
The goal of this paper is to propose a method using a reactive multi-agent system to merge command laws of the control architecture (attraction to target controller and the obstacle avoidance controllers).

Multi-agent systems are an efficient approach to problem solving and decision making. They can be applied to a wide range of applications thanks to their intrinsic properties and features such as simplicity, flexibility, reliability, self-organization/emergent phenomena, low-cost agent design and adaptation capacity. It has been shown that reactive multi-agent systems are efficient at tackling complex problems \cite{ICTAI}, the cooperation of situated agents/robots \cite{PAAMS2010}, data fusion and problem/game solving \cite{Dimarzo2004}.
In this context, our proposal consists in making decisions by evaluating emergent properties of the agents' organization.

The paper is structured as follows: in the next section, the control architecture for the navigation of a UGV is introduced, and the model of the UGV and its controllers are detailed.
The multi-agent system and its properties applied to autonomous navigation are described in Section \ref{sec:multiag}.
Simulations showing the efficiency of our proposal are detailed in Section \ref{sec:simulation}. Finally, conclusions and future works are given in Section \ref{sec:conclusion}.

%==============================================================================
\section{Control architecture}\label{sec:ConArc}
The control architecture for a safe and smooth autonomous navigation of a UGV is shown in Fig. \ref{fig:ArchitectureControle}.
It is designed for a UGV modeled as a tricycle robot. This architecture aims to manage the interactions among the elementary controllers while guaranteeing the stability of the overall control \cite{Benzerrouk2012syroco}. The global navigation framework is operated by the \textit{Hierarchical action selection} block, which selects the elementary controller (\textit{Target reaching} or \textit{Obstacle avoidance}) according to the context of the environment. Each elementary controller (cf. Fig. \ref{fig:ArchitectureControle}) provides as output ($\textbf{O}_{AT}$ or $\textbf{O}_{OA}$) the control input $\textbf{I}_{SP}$ of the \textit{Control law} block.

In this work, a single control law for the UGV (tricycle robot) is used. It considers the vehicle postures and velocities. This control law allows the UGV to reach a static or dynamic target with a desired orientation and velocity (cf. subsection \ref{subsec:ConLaw}). The inputs of the control law (posture errors between the vehicle and its assigned target) are provided by the elementary controllers (cf. subsection \ref{subsec:ElemCon}). The control law is synthesized according to Lyapunov theorem (more details are given in \cite{Vilca2013IROS}). The main blocks of the architecture are detailed below.

\begin{figure}[!b]
  \centering
	\includegraphics[width=0.47\textwidth]{Figures/archControl_vICINCO.pdf}
  \caption{Control architecture embedded in the UGV for autonomous navigation.}
  \label{fig:ArchitectureControle}
\end{figure}

The \textit{Sensor Information} block incorporates the proprioceptive and exteroceptive sensors such as range sensors, cameras, odometers and RTK-GPS. Its goal is to capture information related to the robot environment, mainly potential obstacles \cite{Clark2007,Das2002}. In the sequel, we assume that the UGV has an RTK-GPS and a LIDAR range sensor.

The control architecture uses a \textit{Hierarchical action selection} mechanism to manage the switches between the two elementary controllers (behavior-based approach), the \textit{Target reaching} and \textit{Obstacle avoidance} blocks, according to the environment perception. The hierarchical action selection mechanism activates the \textit{Obstacle avoidance} block as soon as it detects at least one obstacle which can hinder the future vehicle movement toward its dynamic virtual target (more details are given in \cite{Adouane2011}). This anticipates the activation of the obstacle avoidance controller and decreases the time to reach the assigned target (static or dynamic).
%\begin{algorithm}[!ht]
%    \LinesNumbered \SetAlgoLined
%    \eIf{It exists at least one constrained obstacle}
%    {Activate \textit{Obstacle avoidance} controller\;}
%    {Activate \textit{Dynamic target reaching} controller\;}
%    \caption{Hierarchical action selection}
%    \label{Algo:HierarchicalAtionSelection}
%\end{algorithm}
In order to give enough details on the presented control architecture, the following subsections present the UGV model and the elementary controllers.

%******************************************************************************
\subsection{Vehicle modeling}\label{subsec:modelrob}
We assume that the UGV evolves on asphalt roads in a cluttered urban environment at relatively low speed (less than $v_{max} = 2~m/s$). Hence, a kinematic model of the UGV (which relies on pure rolling without slipping) is sufficient.
The kinematic model of the UGV is based on the well-known tricycle model \cite{Luca98}: the two front wheels are replaced by a single virtual wheel located at the center between them.
The equations of the UGV model can be written as (cf. Fig. \ref{fig:kinmodel}):
\begin{equation}\label{eq:kinmodel}
\left\{\begin{array}{ccc}
\dot{x} & = & v\cos(\theta)\\
\dot{y} & = & v\sin(\theta)\\
\dot{\theta} & = & v\tan(\gamma)/l_b
\end{array}\right.
\end{equation}

where $(x,y,\theta)$ is the UGV posture in the global reference frame $X_G Y_G$, $v$ and $\gamma$ are respectively the linear velocity and the orientation of the vehicle front wheel, and $l_b$ is the wheelbase of the vehicle.
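For concreteness, the kinematic model (\ref{eq:kinmodel}) can be simulated with a simple explicit Euler step. The following sketch is illustrative only (the function name, integration scheme and step size are our choices, not part of the architecture):

```python
import math

def tricycle_step(x, y, theta, v, gamma, l_b, dt):
    """One explicit Euler step of the tricycle kinematic model:
    x' = v cos(theta), y' = v sin(theta), theta' = v tan(gamma)/l_b."""
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + (v * math.tan(gamma) / l_b) * dt
    return x_new, y_new, theta_new
```

With $\gamma = 0$ the virtual front wheel is straight and the posture advances along the current heading only.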

\begin{figure}[!b]
	\centering
  \includegraphics[width=0.47\textwidth]{Figures/kinematicmodel_tricycle_vICINCO.pdf}
  \caption{UGV and target configuration variables in Cartesian reference frames (local and global).}
  \label{fig:kinmodel}
\end{figure}

%******************************************************************************
\subsection{Elementary controllers}\label{subsec:ElemCon}
Each elementary controller generates the control inputs $\textbf{I}_{SP}$ (posture errors $(e_x, e_y,e_\theta)$ and velocities $v_T$) of the \textit{Control law} block (cf. Fig. \ref{fig:ArchitectureControle}).

%******************************************************************************
\subsubsection{Target reaching controller}\label{sec:TarReaCon}
The target set-point is modeled as a point with non-holonomic constraints (cf. Fig. \ref{fig:kinmodel}).
For static target reaching (\textit{point stabilization}, i.e., reaching a specific point with a given orientation), $v_T$ is not necessarily equal to zero; $v_T$ is then considered as a desired velocity value for the vehicle when it reaches the desired target posture $(x_T,y_T,\theta_T)$.

Before introducing the control law, let us describe the following notations (cf. Fig. \ref{fig:kinmodel}):
\begin{itemize}	
	\item $I_{cc}$ is the instantaneous center of curvature of the vehicle trajectory, $r_{c} =  l_b/\tan(\gamma)$ is the radius of curvature and $c_{c} = 1/r_{c}$ is the curvature.	
	\item $\left( e_x, e_y, e_\theta\right)$ are the errors between the vehicle and target postures w.r.t. the local frame $(X_mY_m)$.
	\item $\theta_{RT}$ and $d$ are respectively the angle and distance between the target and vehicle positions.
	\item $e_{RT}$ is the error related to the vehicle position $(x,y)$ w.r.t the target orientation.
\end{itemize}

This controller guides the vehicle towards the static target. It is based on the posture control of the UGV w.r.t. the target (represented by errors variables $(e_x, e_y,e_\theta)$ in Fig. \ref{fig:kinmodel}).
These errors are computed w.r.t. the local reference frame $X_mY_m$ and they are given by:
\begin{equation}\label{eq:error}
	\left\{\begin{array}{cl}
	e_x = & ~~~\cos(\theta)(x_T - x) + \sin(\theta)(y_T-y) \\
	e_y = & -\sin(\theta)(x_T-x) + \cos(\theta)(y_T-y) \\
	e_\theta = & ~~~\theta_T-\theta 
	\end{array}\right.
\end{equation}

The error function $e_{RT}$ is added to the canonical error system (\ref{eq:error}) (cf. Fig. \ref{fig:kinmodel}). Let us now write $d$ and $\theta_{RT}$ as (cf. Fig. \ref{fig:kinmodel}):
\begin{align}\label{eq:distance}
	& \ \ \ \ \ \ \ d = \sqrt{(x_T-x)^2 + (y_T-y)^2}\\ \label{eq:angleRT}
	& \left\{\begin{array}{rlr}
	  \theta_{RT} = & \arctan\left((y_T-y)/(x_T-x)\right) & \mbox{  if } d>\xi \\
	  \theta_{RT} = & \theta_T & \mbox{  if } d\leq\xi\\
  	\end{array}\right.
\end{align}
where $\xi$ is a small positive value ($\xi \approx 0$). The error $e_{RT}$ is defined as (cf. Fig. \ref{fig:kinmodel}): 
\begin{equation}\label{eq:ert}
e_{RT} = \ \ \theta_T-\theta_{RT} 
\end{equation}

Furthermore, the velocity set-point $v_T$ of the static target is defined by the designer according to the task.
Finally, the posture errors and velocity $(e_x,e_y,e_\theta,v_T)$ are the inputs of the \textit{Control law} block (cf. subsection \ref{subsec:ConLaw}).
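The computation of the control inputs (\ref{eq:error})--(\ref{eq:ert}) can be sketched as follows (illustrative code; the function name is ours, and \texttt{atan2} is used as a quadrant-robust variant of the $\arctan$ in (\ref{eq:angleRT})):

```python
import math

def control_inputs(x, y, theta, x_T, y_T, theta_T, xi=1e-3):
    """Posture errors (e_x, e_y, e_theta) in the local frame and the
    additional error e_RT = theta_T - theta_RT."""
    dx, dy = x_T - x, y_T - y
    # Errors expressed w.r.t. the local reference frame X_m Y_m
    e_x = math.cos(theta) * dx + math.sin(theta) * dy
    e_y = -math.sin(theta) * dx + math.cos(theta) * dy
    e_theta = theta_T - theta
    d = math.hypot(dx, dy)                 # distance vehicle-target
    theta_RT = math.atan2(dy, dx) if d > xi else theta_T
    e_RT = theta_T - theta_RT
    return e_x, e_y, e_theta, e_RT
```

When the vehicle is aligned with a target one meter ahead, all four errors vanish except $e_x = 1$.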

%******************************************************************************
\subsubsection{Obstacle avoidance controller}\label{subsec:ObsAvoiCon}
Different methods can be found in the literature for obstacle avoidance \cite{Khatib86,Zapata04}. One of them is the limit-cycle method: the UGV reactively avoids the obstacle by accurately tracking limit-cycle trajectories, as detailed in \cite{Adouane2011}. The main ideas behind this controller are briefly recalled below:

%\begin{figure}[!b]
    %\centering    
		%\includegraphics[width=0.45\textwidth]{Figures/EllipseCycleLimits.pdf}
    %\caption{Clockwise ($m=1$) and counter-clockwise ($m=-1$) shape for the used elliptic limit-cycles.}
    %\label{fig:LC}
%\end{figure}

The differential equations of the elliptic limit-cycles are:
\small{\begin{eqnarray}\label{Eq:ClockWiseCycleLimit}
\dot{x}_s = & m(B y_s + 0.5C x_s) + x_s(1  -  A x_s^2 - B y_s^2 - C x_s y_s) \\ \label{Eq:CounterClockWiseCycleLimit}
\dot{y}_s = & -m(A x_s + 0.5C y_s) + y_s(1  - A x_s^2 - B y_s^2 - C x_s y_s)
\end{eqnarray}}
\normalsize

with $m=\pm 1$ according to the avoidance direction (clockwise or counter-clockwise), and $(x_s,y_s)$ the position of the UGV with respect to the center of the ellipse. The variables $A$, $B$ and $C$ are given by:
\begin{align}
A = & (\sin(\Omega)/b_{lc})^2 + (\cos(\Omega)/a_{lc})^2 \\
B = &(\cos(\Omega)/b_{lc})^2 + (\sin(\Omega)/a_{lc})^2 \\
C = & (1/a^2_{lc}-1/b^2_{lc}) \sin(2\Omega)
\end{align}
where $a_{lc}$ and $b_{lc}$ characterize respectively the major and minor elliptic semi-axes and $\Omega$ gives the ellipse orientation. 

In our case, the controller can be written as an orientation control. We thus consider $e_x=0$ and $e_y=0$ in (\ref{eq:error}) (cf. Fig. \ref{fig:kinmodel}), i.e., the vehicle position is at each sample time the desired position. The limit-cycle property then allows the obstacles to be avoided. The desired vehicle orientation is given by the differential equations of the limit cycle (\ref{Eq:ClockWiseCycleLimit}) and (\ref{Eq:CounterClockWiseCycleLimit}):
\begin{equation}\label{eq:eq5}
	\theta_d = \arctan\left(\dot{y}_s/\dot{x}_s\right)
\end{equation}

Furthermore, the linear velocity of the UGV is decreased for a safe avoidance when the obstacle avoidance controller is activated, e.g., $v_T=v_{max}/2$.
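The desired orientation (\ref{eq:eq5}) can be computed directly from (\ref{Eq:ClockWiseCycleLimit}) and (\ref{Eq:CounterClockWiseCycleLimit}), as in the following sketch (illustrative; the function name is ours and \texttt{atan2} replaces the $\arctan$ to keep the full angular range):

```python
import math

def limit_cycle_heading(x_s, y_s, a_lc, b_lc, Omega, m=1):
    """Desired orientation theta_d from the elliptic limit-cycle
    differential equations; m = +/-1 selects the avoidance direction."""
    A = (math.sin(Omega) / b_lc) ** 2 + (math.cos(Omega) / a_lc) ** 2
    B = (math.cos(Omega) / b_lc) ** 2 + (math.sin(Omega) / a_lc) ** 2
    C = (1 / a_lc ** 2 - 1 / b_lc ** 2) * math.sin(2 * Omega)
    common = 1 - A * x_s ** 2 - B * y_s ** 2 - C * x_s * y_s
    x_dot = m * (B * y_s + 0.5 * C * x_s) + x_s * common
    y_dot = -m * (A * x_s + 0.5 * C * y_s) + y_s * common
    return math.atan2(y_dot, x_dot)
```

On a unit circle ($a_{lc}=b_{lc}=1$, $\Omega=0$) and at the point $(1,0)$, the clockwise case $m=1$ yields a heading of $-\pi/2$, i.e., tangent to the orbit.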

%******************************************************************************
\subsubsection{Control law}\label{subsec:ConLaw}
The used control law is designed according to a Lyapunov stability analysis \cite{Vilca2013IROS}. The authors demonstrate that the vehicle converges to the static/dynamic target as long as this target is ahead of the vehicle w.r.t. its orientation; this condition allows local minima to be avoided. The desired vehicle linear velocity $v$ and front wheel orientation $\gamma$ that make the errors $(e_x, e_y, e_\theta)$ always converge to zero can be chosen as:
\begin{align}\label{eq:vcontroller}
v & = v_T\cos(e_\theta) + v_b \\ \label{eq:gammacontroller}
\gamma & = \arctan(l_b \left[ r^{-1}_{c_T}\cos^{-1}(e_\theta) + c_c\right])
\end{align} 
where $v_b$ and $c_c$ are given by:
\begin{align}\label{eq:vauxcontroller}
v_b = & K_x \left( K_d e_x + K_l d \sin(e_{RT})\sin(e_\theta) + K_o\dfeth c_c\right) \\\nonumber
c_c = & \frac{d^2K_l\dfeRT}{r_{c_T}K_o\dfeth\cos(e_\theta)} + \frac{K_{RT}\dfeRTs}{\dfeth\cos(e_\theta)} \\\label{eq:curvcontroller}
& + \frac{K_d e_y - K_l d\sin(e_{RT})\cos(e_\theta)}{K_o\cos(e_\theta)} + K_\theta\tan(e_\theta)
\end{align}

$\textbf{K} = (K_d,K_l,K_o,K_x,K_{RT},K_\theta)$ is a vector of positive constants which must be defined by the designer according to the desired convergence toward the assigned target (more details are given in \cite{Vilca2013IROS}).
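A direct transcription of (\ref{eq:vcontroller})--(\ref{eq:curvcontroller}) is sketched below (illustrative only; $1/r_{c_T}$ is taken as zero for a static target, and (\ref{eq:curvcontroller}) is singular at $e_\theta = 0$, a case handled in \cite{Vilca2013IROS} but not in this sketch):

```python
import math

def control_law(e_x, e_y, e_theta, e_RT, d, v_T, l_b, K, inv_r_cT=0.0):
    """Linear velocity v and front-wheel orientation gamma.

    K = (K_d, K_l, K_o, K_x, K_RT, K_theta) are the positive gains;
    inv_r_cT = 1/r_cT is zero for a static target. The curvature term
    c_c is singular at e_theta = 0 (not handled here)."""
    K_d, K_l, K_o, K_x, K_RT, K_th = K
    s_th, c_th = math.sin(e_theta), math.cos(e_theta)
    s_rt, c_rt = math.sin(e_RT), math.cos(e_RT)
    c_c = (d ** 2 * K_l * s_rt * c_rt * inv_r_cT / (K_o * s_th * c_th)
           + K_RT * s_rt ** 2 / (s_th * c_th)
           + (K_d * e_y - K_l * d * s_rt * c_th) / (K_o * c_th)
           + K_th * math.tan(e_theta))
    v_b = K_x * (K_d * e_x + K_l * d * s_rt * s_th + K_o * s_th * c_c)
    v = v_T * c_th + v_b
    gamma = math.atan(l_b * (inv_r_cT / c_th + c_c))
    return v, gamma
```

With $e_{RT} = e_y = 0$ and unit gains, the curvature reduces to $c_c = K_\theta \tan(e_\theta)$, so $\gamma = \arctan(l_b\,c_c)$ steers directly against the orientation error.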

%==============================================================================
\section{Hierarchical action selection using a reactive multi-agent system}\label{sec:multiag}
\subsection{Global overview}
Different solutions can be chosen to combine the elementary controllers. The easiest one is a hard switch which selects one controller according to the distance to the nearest obstacle. This solution provides good results in terms of reliability and control law continuity. Nevertheless, it ensures neither an adaptive behavior nor a good level of comfort for the passengers.
%Until now, the current proposal is composed of two controllers, one of them being chosen at each time step depending on the distance to the nearest obstacle. 

The aim of this section is to introduce an adaptive system that combines the \textit{target reaching} and \textit{obstacle avoidance} behaviors depending on the distribution of the obstacles perceived by the vehicle. The proposed switching system between controllers is based on a multi-agent system in which the interpretation of agency at the organizational level provides a merged command.
The proposed approach is an extension of the model presented in \cite{Dafflon2013}. It is based on a situated virtual environment where the data provided by the sensors, the agents and the target position interact together. The interactions between system entities are based on Newtonian laws.

The merging process proceeds as follows (Figure \ref{fig:general}):
\begin{itemize}
	\item The agents' virtual environment is built using the perception of the obstacles that have to be avoided, and integrates the position of the target relative to the current vehicle position. The dynamics of this environment is linked to the changes that may occur in the vehicle's perception.
	\item Agents, considered as elementary particles, interact with the vehicle target position and the obstacle representatives.
	\item The agents' organization is then measured by an external observer, taking into account geometrical and dynamical aspects. This measure is translated into a merged control law that takes into account both the navigation goal and the obstacle distribution.
\end{itemize}

	\begin{figure}[t]
	\centering
  \includegraphics[width=7cm]{Figures/general.png}
  \caption{Global architecture}
  \label{fig:general}
\end{figure}

This system is detailed in the following subsections.

%\subsection{Multi-agent system}
%the system owned to distributed system family inspired by physics. Is is composed of serveral components : agents population, set of entity, goal,...
%
%Interaction between system components used classical newton law influencing agents. Agent evolving in the environment like a particle moves in force field.
%
%Here, forces fields are generated by environment entity. Goal produces attractive spot, obstacle a repulsion area. These entity are a virtual representation of external constraints defined by  perceived obstacles.
 
\subsection{Model description}

\subsubsection{Agents Environment}
The agents' environment is the key component of such approaches. Its role is to link the vehicle's world (real or simulated) with the agents' world. It integrates both the data provided by the two controllers presented above and the information provided by the sensors. Basically, the data provided by the controllers are combined so as to produce an attractive spot for the agents. The combination takes into account the distance to the nearest obstacle, using the following equation:
\begin{equation}
	\vec{T}(x,y) = f(d)\vec{OA} + (1-f(d))\vec{F}
        \end{equation}
        
where:
\begin{itemize}
	\item $\vec{T}$: target position in the vehicle reference frame,
	\item $\vec{OA}$: obstacle avoidance command,
	\item $\vec{F}$: path following command.
\end{itemize}

The function $f$ is defined as $f(x)=(1+0.5\,x^2)^{-1}$, with $x$ the distance to the nearest obstacle. The value of $f(x)$ is high when the obstacle is close to the vehicle; consequently, more importance is given to obstacle avoidance.
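The blending of the two commands can be sketched as follows (illustrative code; the commands are represented here as 2D vectors):

```python
def blended_target(d, OA, F):
    """Blend the obstacle-avoidance command OA with the path-following
    command F according to the distance d to the nearest obstacle."""
    f = 1.0 / (1.0 + 0.5 * d ** 2)  # f(d): close obstacle -> weight near 1
    return tuple(f * a + (1.0 - f) * b for a, b in zip(OA, F))
```

At $d=0$ the attractive spot coincides with the avoidance command; for large $d$ it tends to the path-following command.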

As opposed to the treatment applied to the command laws, the positions of the perceived obstacles are directly integrated into the virtual environment as aggregates of repulsive spots.


\subsubsection{Agents and Interactions} 
Agents are considered as small-mass particles evolving in a force field. The force field is obtained by combining the repulsion forces induced by the obstacle representatives and by the agents themselves with the attraction forces induced by the target to reach. An agent perceives its environment through a circular frustum. The target position is projected onto the frustum border when it is out of range; this projection allows agents to always know the target direction in the environment.

As explained before, the interactions fall into two categories: attractions and repulsions.
\begin{itemize}
	\item Interaction between agents and target (Attraction):
The attraction force generated by the target is computed as a linear force defined by:
\begin{equation}
\vec{F}=\beta_g m \vec{A_iT}
	\end{equation}
where $\beta_g$ is the attraction gain, $m$ the agent mass and $\vec{A_iT}$ the vector from agent $A_i$ to the target.

	\item Interaction between agents (Repulsion):
The repulsion between agents is introduced to ensure a homogeneous exploration of the environment, avoiding spurious agent clusters. This repulsion is a classical Newtonian force in $r^{-2}$. If $A_i$ and $A_j$ are two agents located at $P_i$ and $P_j$, the repulsion force is given by:
\begin{equation}
		\vec{Fr_{ij}} = \alpha m_i m_j \frac{\vec{P_iP_j}}{\|\vec{P_iP_j}\|^3}
\end{equation}

	\item Interaction between agents and obstacles (Repulsion):
This interaction shares the same formulation as the agent repulsion. In a 2-dimensional space, this force can be written as:
	\begin{equation}
        \left\{
        \begin{array}{l}
        Fo^X_i=\sum_{o}\left(\Delta_o \cdot m \cdot m_o\frac{(x_i-x_o)}{((y_i-y_o)^2+(x_i-x_o)^2)^{3/2}}\right)\\
        Fo^Y_i=\sum_{o}\left(\Delta_o \cdot m \cdot m_o\frac{(y_i-y_o)}{((y_i-y_o)^2+(x_i-x_o)^2)^{3/2}}\right)
        \end{array}
        \right.
        \end{equation}
\end{itemize}
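The resultant force applied to one agent can be sketched as follows (illustrative code; the gains $\beta_g$, $\alpha$, the factor $\Delta_o$ and the masses are folded into scalar parameters, and the sign convention is chosen so that both repulsions point away from the repulsor, as in the obstacle equation):

```python
def net_force(agent, others, obstacles, target, beta_g=1.0, alpha=1.0, m=1.0):
    """Resultant force on one agent: linear attraction toward the target
    plus inverse-square repulsion from the other agents and the obstacle
    spots."""
    ax, ay = agent
    # Linear attraction F = beta_g * m * (vector agent -> target)
    fx = beta_g * m * (target[0] - ax)
    fy = beta_g * m * (target[1] - ay)
    for px, py in list(others) + list(obstacles):
        dx, dy = ax - px, ay - py          # points away from the repulsor
        r3 = (dx * dx + dy * dy) ** 1.5    # ||P||^3 for the r^-2 law
        fx += alpha * m * m * dx / r3
        fy += alpha * m * m * dy / r3
    return fx, fy
```

An agent midway between an obstacle behind it and the target ahead of it is pushed forward by both terms.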

These interactions are summarized in the following chart (Figure \ref{fig:interaction}):
\begin{figure}[t]
	\centering
  \includegraphics[width=0.35\textwidth]{Figures/rio.png}
  \caption{Interactions applied to agents }
  \label{fig:interaction}
\end{figure}


\subsubsection{Agency evaluation}
The decision process is based on the evaluation of the distribution of the agents. The system evaluates the repartition of the agent population and computes its center of mass $P_{mean}(x,y)$. The result of this observation is noted $\overrightarrow{P_{dir}(x,y)}$ and corresponds to the vector from the vehicle to the center of mass of the agent population (Figure \ref{schemaAgents5}).

$\overrightarrow{P_{dir}(x,y)}$ is the new command used to control the vehicle: the angle between $\overrightarrow{P_{dir}(x,y)}$ and the local X axis is the steering angle adopted by the vehicle, and the length of $\overrightarrow{P_{dir}(x,y)}$ corresponds to the speed command reference.
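This decision step can be sketched as follows (illustrative code; positions are assumed to be already expressed in the vehicle frame, so the rotation into that frame is omitted):

```python
import math

def agency_command(vehicle, agents):
    """Steering and speed references from the agent distribution: the
    vector P_dir from the vehicle to the agents' center of mass gives
    the steering angle (its direction) and the speed (its length)."""
    n = len(agents)
    cx = sum(a[0] for a in agents) / n     # center of mass P_mean
    cy = sum(a[1] for a in agents) / n
    dx, dy = cx - vehicle[0], cy - vehicle[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)
```

Agents clustered ahead and slightly to the left of the vehicle thus yield a small positive steering angle and a speed proportional to how far ahead they sit.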

 \begin{figure}[!ht]
		\centering
		\includegraphics[width=0.30\textwidth]{Figures/directionDetail2.png}
		\caption{\label{schemaAgents5} Decision vector computation}		
	\end{figure}

%==============================================================================
\section{Validation}\label{sec:simulation}

%******************************************************************************
\subsection{Simulation tool }
Janus\footnote{http://www.janus-project.org/} is a platform specifically designed for the implementation of multi-agent systems \cite{GaudGallandHilaireKoukam2008_16}. It is based on an organizational approach and supports the implementation of the concepts of role and organization as first-class entities. This has a significant impact on agent implementation and allows an agent to easily and dynamically change its behavior.

%******************************************************************************
\subsection{Metrics and scenario}
Metrics have been defined to record relevant parameters during the simulation time; they are used to draw conclusions from post-simulation results.
\begin{itemize}
\item Steering angle metric: records the steering angle command of the vehicle.
\item Speed metric: records the speed command of the vehicle.
\item Physical integrity metric: the closest distance to the obstacles; the time to collision is also evaluated using the current velocity vector.
\end{itemize}
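The physical integrity metric can be sketched as follows (illustrative code; the time to collision is estimated from the closing speed along the current velocity vector, one simple choice among others):

```python
import math

def physical_integrity(position, velocity, obstacles):
    """Closest obstacle distance and the minimum time to collision,
    estimated from the closing speed along the current velocity vector."""
    vx, vy = velocity
    d_min, ttc = float('inf'), float('inf')
    for ox, oy in obstacles:
        dx, dy = ox - position[0], oy - position[1]
        d = math.hypot(dx, dy)
        d_min = min(d_min, d)
        closing = (dx * vx + dy * vy) / d if d > 0 else 0.0
        if closing > 0:  # only obstacles the vehicle is approaching
            ttc = min(ttc, d / closing)
    return d_min, ttc
```

A vehicle heading at $1~m/s$ straight toward an obstacle $2~m$ away gets a closest distance of $2~m$ and a time to collision of $2~s$.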
 
This paper presents a hybrid control architecture based on a multi-agent system. To illustrate our approach, simulations were made in Matlab$^\circledR$. Each scenario is divided into two parts: first, a hard switch between the obstacle avoidance and target reaching controllers is used; second, the multi-agent system adapts the command by taking the environment data into account. The obstacles are placed on the vehicle path and the simulation is stopped at the first collision.

%******************************************************************************
\subsection{Simulation results}

%******************************************************************************
\subsubsection{Scenario with a single obstacle}
\begin{itemize}
	\item \textbf{Hard switch.} \\
	Figure \ref{fig:trajectoire2obstacleIP} shows the trajectory of the vehicle using an immediate switch (hard switch) between the controllers: attraction to the target (green) and obstacle avoidance (magenta). The switch occurs when a hindering obstacle is detected.
	Figure \ref{fig:angle1obstacleIP} shows the evolution of the velocity and steering angle commands using the hard switch between the controllers. We can observe discontinuities in the commands, which can damage the actuators and be uncomfortable for the passengers.

Figure \ref{fig:dis1obstacleIP} shows the distance to the obstacle using the hard switch between the controllers. We can observe that the vehicle navigation is safe.
\begin{figure}[b]
	\centering
	\includegraphics[width=0.40\textwidth]{Figures/res_IP/RobotTrajectory.pdf}
  \caption{Trajectory of the vehicle (Hard switch)}
  \label{fig:trajectoire2obstacleIP}
\end{figure}

\begin{figure}[!b]
\begin{minipage}{0.23\textwidth}
	\centering
  \includegraphics[width=\textwidth]{Figures/res_IP/Commands.pdf}
  \caption{Commands of the vehicle (Hard switch)}
  \label{fig:angle1obstacleIP}
\end{minipage}\hfill
\begin{minipage}{0.23\textwidth}
	\centering
	\includegraphics[width=\textwidth]{Figures/res_IP/Distance2obstacle.pdf}
	\caption{Distance to the obstacle (Hard switch).}
	\label{fig:dis1obstacleIP}
\end{minipage}
\end{figure}

\item \textbf{Multi-agent switch.}\\
Figure \ref{fig:trajectoire1obsteble} shows the trajectory of the vehicle using the multi-agent switch; the trajectory (green) results from the combination of the two commands.
		\begin{figure}[!ht]
	\centering
  \includegraphics[width=7cm]{Figures/res/1obs/trajMatlabfinal1obs.png}
  \caption{Trajectory of the vehicle (Multi-agent)}
  \label{fig:trajectoire1obsteble}
\end{figure}

Figure \ref{fig:angle1obstacle} shows the evolution of the velocity and steering angle commands using the multi-agent switch. No discontinuity is observed during the simulation.
\begin{figure}[!ht]
	\centering
  \includegraphics[width=7cm]{Figures/res/1obs/degcitfinal1obs.png}
  \caption{Commands of the vehicle (Multi-agent)}
  \label{fig:angle1obstacle}
\end{figure}

\end{itemize}

%******************************************************************************	
\subsubsection{Scenario with four obstacles}
This simulation increases the complexity of the environment. The vehicle navigates while avoiding the obstacles (in the clockwise and counter-clockwise directions).
\begin{itemize}
	\item \textbf{Hard switch.} \\
Figure \ref{fig:trajectoire4obstacleIP} shows the trajectory of the vehicle using a hard switch between the controllers. The switches occur when hindering obstacles are detected (e.g., obstacles $3$ and $4$).
Figure \ref{fig:angle4obstacleIP} shows the evolution of the velocity and steering angle commands using the hard switch between the controllers. We can again observe discontinuities in the commands, which can damage the actuators and can be uncomfortable for the passengers.

Figure \ref{fig:dis4obstacleIP} shows the distance to the four obstacles using the hard switch between the controllers. We can observe that the vehicle navigation remains safe.

\begin{figure}[!t]
	\centering
  \includegraphics[width=0.40\textwidth]{Figures/res_IP/RobotTrajectoryNobst.pdf}
  \caption{Trajectory of the vehicle (Hard switch)}
  \label{fig:trajectoire4obstacleIP}
\end{figure}
 
\begin{figure}[t]
\begin{minipage}{0.23\textwidth}
	\centering
  \includegraphics[width=\textwidth]{Figures/res_IP/Commands_Nobst.pdf}
  \caption{Commands of the vehicle (Hard switch).}
  \label{fig:angle4obstacleIP}
\end{minipage}\hfill
\begin{minipage}{0.23\textwidth}
	\centering
  \includegraphics[width=\textwidth]{Figures/res_IP/Distance2obstacle_Nobst.pdf}
  \caption{Distance to the obstacle (Hard switch).}
  \label{fig:dis4obstacleIP}
\end{minipage}
\end{figure}

\item \textbf{Multi-agent switch.}\\
Figure \ref{fig:trajectoire4obsteble} shows the trajectory of the vehicle using the multi-agent switch. The gradual take-over of the obstacle avoidance law yields a smoother path.

\begin{figure}[!b]
	\centering
  \includegraphics[width=7cm]{Figures/res/4obs/trajMatlabfinal.png}
  \caption{Trajectory of the vehicle (Multi-agent)}
  \label{fig:trajectoire4obsteble}
\end{figure}

Figure \ref{fig:angle4obstacle} shows the evolution of the velocity and steering angle during the simulation. We can see that the maximum speed is higher and that the steering angle is smoother when the multi-agent merging is used.

These simulations can be summarized as follows: the soft switch allows the vehicle to move closer to obstacles at a higher speed while ensuring better safety.

\begin{figure}[t]
	\centering
  \includegraphics[width=7cm]{Figures/res/4obs/degcitfinal.png}
	\vspace{-0.3cm}
  \caption{Commands of the vehicle (Multi-agent)}
  \label{fig:angle4obstacle}	
\end{figure}
\end{itemize}
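
The continuity of the merged commands observed above can be illustrated schematically. As a simplified sketch (the exact weighting is the one computed by the reactive multi-agent system described earlier; the symbols $u_{att}$, $u_{obs}$ and $\alpha$ are introduced here only for illustration), the merged command can be viewed as a convex combination of the two elementary control laws:
\begin{equation*}
u(t) = \alpha(t)\, u_{obs}(t) + \bigl(1-\alpha(t)\bigr)\, u_{att}(t), \qquad \alpha(t) \in [0,1],
\end{equation*}
where $u_{att}$ denotes the target-reaching command and $u_{obs}$ the obstacle avoidance command. A hard switch corresponds to $\alpha$ jumping between $0$ and $1$, producing the discontinuities seen in Figures \ref{fig:angle1obstacleIP} and \ref{fig:angle4obstacleIP}, whereas the multi-agent merge varies $\alpha$ continuously, so the resulting velocity and steering angle commands remain continuous.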

%==============================================================================
\section{Conclusions}
\label{sec:conclusion}
In this paper, a global control architecture was presented. Originally based on a hard switch, the proposed approach merges control laws to adapt the vehicle behavior to its environment.
To cope with the navigation of a UGV in a cluttered environment, a single control law is embedded in the UGV, which simplifies the proposed control architecture for autonomous navigation. The obstacle avoidance based on limit-cycles guarantees safe navigation in cluttered environments.
The merging system provides an adaptive hierarchical action selection using a reactive multi-agent system for control law merging.
Environment data are provided by the vehicle sensors and used to anticipate obstacles. The multi-agent system can also be seen as a safety system in case of a failure of the obstacle avoidance controller.
This method was successfully tested in simulation, and the results obtained encourage us to test it on actual laboratory vehicles.

%==============================================================================
\section*{Acknowledgements}
We would like to thank Etienne FRANCOIS for the simulation and development work carried out during this project.

This work was supported by the French ANR (National Research Agency) through the ANR-VTT SafePlatoon project (ANR-10-VPTT-011).


% trigger a \newpage just before the given reference
% number - used to balance the columns on the last page
% adjust value as needed - may need to be readjusted if
% the document is modified later
%\IEEEtriggeratref{8}
% The "triggered" command can be changed if desired:
%\IEEEtriggercmd{\enlargethispage{-5in}}

% references section
\bibliographystyle{IEEEtran}
\bibliography{BiblioICINCO_IP}

% that's all folks
\end{document}


