In the last section, we described how to obtain the variables Direction, Distance, Confidence, and Velocity, which are used to localize the moving sound source and to guide steering.

\begin{figure}[htb]
    \centering
        \includegraphics[width=4cm]{images//region.jpg}
        \caption{Decomposition of regions around the agent. In the overlap region, more than one case is triggered. The door, corner, and choking cases are not shown in this figure.}
    \label{}
\end{figure}

\subsection{Environment Perception of Virtual Agent}
We build a rule-based system, a widely used method in steering; the agent's auditory perception here is limited to collision avoidance (CA).
Line-of-sight determines which objects can be seen from the position of the virtual agent. A reasonable and widely accepted assumption is that the agent can only see objects in front of and beside it, not objects behind it; that is, the angle between the head orientation and the line-of-sight should be no greater than a given threshold. The region behind the agent is a blind region for vision, but not for audition, so audition is a necessary supplement to virtual perception.

\subsection{Sound-Based Steering}
Recently, a synthetic-vision-based steering approach was proposed \cite{Ondrej:2010:SBS:1778765.1778860}. Based on work in cognitive science \cite{CVB}, their model considers the two factors most important for collision avoidance: angular speed $\dot{\theta}$ and time-to-collision (ttc), computed as $r/v_r$.

%\subsubsection{Rule Based Model}
%Rule based model describes agent movement through a set of basic rules which are triggered in certain cases. The model was firstly introduced in Reynolds' boids system. Agents apply collision detection and avoid colliding with other agents \cite{Pelechano}.
In our model, if the original motion would incur a collision, a subgoal is set so that the agent turns or stops (by setting the subgoal to its current position).



\subsubsection{Fuzzy Logic Controller}
One significant difference between sound-based and vision-based steering is that the former does not provide position information (especially distance) as accurately as the latter. The problem is thus how to navigate without accurate position information while avoiding artifacts (erroneous behaviors). We could compute an accurate position by adopting a high-resolution grid, but that increases the computational cost of the simulation. Instead, we adopt a bionic approach: we navigate with the inaccurate position information directly. A membership function is introduced to reduce the artifacts this inaccuracy brings about: although inaccurate position information might trigger an incorrect response, that response receives a lower weight, computed by the membership function, which mitigates the side effects of the artifacts.
%Compared with vision steering, the auditory steering comes into effect at a relatively long distance.
The weight represents the danger of collision and the fuzzy situation-awareness in each case, and it is calculated as follows:

\begin{equation}
W(\theta,r,v_r,\omega)=\Theta(\theta)\,R(r)\,T(r/v_r)\,\Omega(\omega)
\end{equation}

\begin{figure}[htb]
    \centering
        \includegraphics[width=4cm]{images//position.jpg}
        \caption{Input of the steering strategy.}
    \label{position}
\end{figure}

Here, $\theta$, $r$, and $v_r$ are shown in Figure~\ref{position}, and $\omega$ is the $\dot{\theta}$ in Figure~\ref{position}. The membership functions $\Theta(\theta)$, $R(r)$, $T(r/v_r)$, and $\Omega(\omega)$ are shown in Figure~\ref{membership}.
%Note that threshold in function $V(v_r)$ is very important for inaccuracy-tolerant.
This formula applies to the front, back, and side cases shown later. Fuzzy logic also ensures that one case changes to another smoothly in the overlap region.
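As an illustration, the weight $W$ above can be sketched with simple triangular membership functions. The shapes and thresholds below are placeholders of our own choosing, not the paper's actual membership functions (which are given only graphically):

```python
import math

def tri(x, lo, peak, hi):
    """Triangular membership: rises from lo to peak, falls from peak to hi."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def collision_weight(theta, r, v_r, omega):
    """Fuzzy collision weight W = Theta(theta) * R(r) * T(r/v_r) * Omega(omega).

    All membership shapes and ranges here are illustrative assumptions.
    """
    if v_r <= 0.0:          # target receding or static: no collision risk
        return 0.0
    ttc = r / v_r           # time-to-collision
    m_theta = tri(abs(theta), -1.0, 0.0, math.pi / 2)  # target roughly ahead
    m_r     = tri(r, -1.0, 0.0, 10.0)                  # near targets weigh more
    m_ttc   = tri(ttc, -1.0, 0.0, 5.0)                 # imminent collisions weigh more
    m_omega = tri(abs(omega), -1.0, 0.0, 0.5)          # low angular speed = collision course
    return m_theta * m_r * m_ttc * m_omega
```

Because the factors multiply, any single membership value of zero (e.g., a receding target) suppresses the whole case, while borderline inputs contribute with reduced weight rather than triggering a hard response.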

\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//membership.jpg}
        \caption{Membership function.}
    \label{membership}
\end{figure}


\subsubsection{Division of Basis Cases}
In our model, six basis cases are chosen to best demonstrate the novel results of incorporating the sound modality into agent steering, and new cases can easily be included by defining the situation settings and the related fuzzy weights.
%such as pushing in condition of physical body contact \cite{Pelechano}.
Each case has a case weight, which determines the \emph{hierarchical structure} of cases. Cases at the same level have similar weights, but high-level cases have weights an order of magnitude greater than low-level cases. In this way, different cases have different \emph{priorities}. \emph{High-level} cases include the wall corner case, the choking point case, and the door case; they represent a higher level of perception in steering.


\emph{Low-level} cases include the front, back, and side cases, which deal with normal steering problems. Related work has studied how humans intercept a moving target \cite{HMT}. The space around the agent is decomposed into three regions. In vision-based steering \cite{Ondrej:2010:SBS:1778765.1778860}, only objects in the front region and part of the side regions are considered. This is reasonable for vision steering because the agent can only see other agents in the visible region in front, but for sound steering an audible target in the side or back region will also trigger a response. Considering relative motion and velocity, when a collision is about to occur (i.e., $ttc > 0$ and $\dot{\theta}$ near 0), one naive method to avoid it is to add a velocity vector to the agent's current velocity vector; equivalently, this can be seen as adding a social force that pushes the agent away from the target. But this only increases ttc, delaying rather than eliminating the incoming collision, so the added velocity vector should be turned by a small angle (proportional to the collision risk and up to $15^{\circ}$) toward the side of the agent's current velocity vector, as shown in Fig[](TOBEADDED).


If we adopt the above CA approach, a problem arises: if the added velocity vector's direction is similar or opposite to that of the current velocity vector, it causes unrealistic acceleration (for a target in the back region) or deceleration (for a target in the front region), which is not how humans navigate: we turn to avoid a target in front, and we make way or accelerate for a target behind. In sum, the model is based on the \emph{observation that humans turn to avoid a collision in front, make way or accelerate for a fast-moving target behind, and tend to keep away from a target coming from the left or right.}
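The observation above can be sketched as a small region classifier that selects a per-region behavior. The $45^{\circ}$/$135^{\circ}$ region boundaries and the behavior labels are our own illustrative choices, not the paper's exact thresholds:

```python
import math

def classify_region(relative_angle):
    """Classify a sound target as front/back/side by the absolute angle
    (radians) between the agent's heading and the target direction.
    The pi/4 and 3*pi/4 boundaries are illustrative assumptions."""
    a = abs(relative_angle)
    if a < math.pi / 4:
        return "front"
    if a > 3 * math.pi / 4:
        return "back"
    return "side"

def avoidance_behavior(relative_angle, closing_speed):
    """Pick the human-like avoidance behavior for an audible target."""
    if closing_speed <= 0.0:    # target is not approaching
        return "ignore"
    region = classify_region(relative_angle)
    if region == "front":
        return "turn"           # turn to avoid a target ahead
    if region == "back":
        return "make_way"       # make way / accelerate for a target behind
    return "move_away"          # keep away from a target on the left or right
```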



\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//cases.jpg}
        \caption{An illustration of different cases.}
    \label{}
\end{figure}

\subsubsection{Combination of Cases and Strategies}
%The case weight measures the collision risk in each case. With great risk comes great case weight. 
Since there are only six cases here, in general no more than two cases are triggered simultaneously, and in most circumstances only one case is triggered in our system. Designing a combination strategy, however, enables us to extend the behavior set. When more cases or more targets are considered, an Action Selection Mechanism (ASM) plays an important role in combining all of them. This is not the focus of this paper, because there is already related work in graphics and robotics \cite{4295541}\cite{Wang2008625}.

\begin{equation}
w_i=\mathit{conf}_i \cdot \mathit{case}_i \cdot m(v,\omega,d,\theta)
\end{equation}

\begin{equation}
v=\frac{ \sum_i{v_i w_i} }{ \sum_i{w_i} }
\end{equation}

\begin{equation}
\overrightarrow{P}=\frac{ \sum_i{\overrightarrow{P_i} w_i} }{ \sum_i{w_i} }
\end{equation}

$v_i$ and $\overrightarrow{P_i}$ are respectively the speed and the subgoal coordinate recommended by the $i$-th case, and $w_i$ is the weight of the $i$-th individual case. The weight is calculated by the membership function using fuzzy logic. $v$ and $\overrightarrow{P}$ are the final speed and subgoal coordinate, respectively. In fact, $\overrightarrow{P_i}$ also implicitly includes the turning angle in the front, back, and turn cases, in which the turning angle is proportional to the localization confidence and the membership function. So if the agent perceives an ambiguous sound, or a collision with the source is unlikely, the turn behavior is too slight to be noticeable.
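The weighted averages above can be sketched in a few lines. The (speed, subgoal, weight) triple format is an assumption of ours, not the paper's data structure:

```python
def combine_cases(recommendations):
    """Blend per-case speed and subgoal recommendations by fuzzy weight.

    recommendations: list of (speed, (x, y) subgoal, weight) triples,
    one per triggered case. Implements the weighted averages
    v = sum(v_i * w_i) / sum(w_i) and P = sum(P_i * w_i) / sum(w_i).
    """
    total = sum(w for _, _, w in recommendations)
    if total == 0.0:
        return None                  # no case triggered this frame
    v = sum(vi * w for vi, _, w in recommendations) / total
    px = sum(p[0] * w for _, p, w in recommendations) / total
    py = sum(p[1] * w for _, p, w in recommendations) / total
    return v, (px, py)
```

With this normalization, a case whose membership weight is near zero barely perturbs the final speed and subgoal, which is what keeps low-confidence responses unnoticeable.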

\subsection{Basis Cases And Results}
We then use sound to solve some cases; here, sound is limited to the sound of the agents' steps. In the following cases, we assume that the agents cannot see each other, but one agent can hear another and then react. Relative speed is used to determine the collision avoidance (CA) behavior; it includes relative radial speed and relative angular speed.


\subsubsection{Front Case}
In this case, two agents walk face to face.
The case is triggered when: the relative angular speed is less than a threshold; the relative radial speed is greater than a threshold; and the target is in front of the agent, which means the relative angle is less than a threshold. Of course, each threshold here is a fuzzy boundary, as discussed in the last subsection.
The membership number is calculated as follows:
when the front target is very near, the weight of the front case is high;
when the front target has a low angular speed, the weight is assigned a high value;
when the radial speed is high, the weight of the case will be high.
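As a sketch, the front-case trigger can be written as a crisp predicate. The threshold values below are hypothetical defaults; in the full model each comparison is a fuzzy boundary handled by a membership function rather than a hard cutoff:

```python
def front_case_triggered(relative_angle, angular_speed, radial_speed,
                         angle_thresh=0.5, omega_thresh=0.2, v_thresh=0.1):
    """Crisp version of the front-case trigger (all thresholds assumed).

    relative_angle: angle between heading and target direction (rad)
    angular_speed:  relative angular speed of the target (rad/s)
    radial_speed:   closing speed toward the target (m/s)
    """
    return (abs(angular_speed) < omega_thresh        # near-constant bearing
            and radial_speed > v_thresh              # target closing in
            and abs(relative_angle) < angle_thresh)  # target is ahead
```

The back and side cases differ only in the relative-angle condition, so the same predicate can be reused with a different angle range.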

\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//front.jpg}
        \caption{Front case. One agent hears that another agent is walking toward him face to face, and he changes his orientation (turns right) to make way for the latter.}
    \label{}
\end{figure}

\subsubsection{Back Case}
In this case, one agent is chasing another. The target is outside the sight cone region, so the vision-based steering strategy is not triggered. But if the target is producing sound, the sound-based steering strategy will be triggered, which matches the process of human perception.
The trigger condition is similar to that of the front case; the only difference is that the relative angle should be greater than a threshold. The membership function is the same as in the front case.

\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//figure6.jpg}
        \caption{Back case. (From right to left) The front agent hears that another agent is walking behind him, and he changes his orientation (turns right) to make way for the latter.}
    \label{}
\end{figure}

\subsubsection{Side Case}
The regional sectors are labeled front, back, left, and right. The last two sectors are classified into the side case.
The trigger condition is similar to that of the above two cases; one difference is that the relative angle should be within a certain range. The membership function is the same as in the front case.

\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//turn.jpg}
        \caption{Side case. One agent hears that another agent is walking toward him from the right-hand side, and he turns left.}
    \label{}
\end{figure}

\subsubsection{Corner Case}
Imagine that you are walking toward a corner and hear someone approaching the corner from the other side; in this situation you might stop in order to avoid a collision. Here a ``polite'' agent will also stop at the corner, and this case cannot be solved by previous approaches. The wall corner case in fact represents many cases in which the agent can hear an incoming agent that cannot be seen.
According to Huygens' principle, the corner acts as a sub-source, so the agent localizes the source in the direction of the corner, and the estimated distance decreases as the target approaches. Here the stop behavior is triggered when someone in front of the agent is approaching but the agent cannot see the target. In this case the turn behavior may also be triggered, as shown in the following figures, depending on the distance to the wall: the farther the distance, the more likely the turn behavior is triggered.
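The stop-or-turn decision at the corner can be sketched as a simple rule; the distance threshold and the boolean inputs are illustrative assumptions, since the paper describes this choice only qualitatively:

```python
def corner_response(source_approaching, source_visible, wall_distance,
                    turn_distance=2.0):
    """Decide the corner-case behavior for an audible but unseen target.

    wall_distance: agent's distance to the wall (m); turn_distance is an
    assumed threshold. Farther from the wall, turning is preferred over
    stopping, as described in the text.
    """
    if not source_approaching or source_visible:
        return "continue"        # corner case not triggered
    if wall_distance > turn_distance:
        return "turn"            # enough room to steer around the corner
    return "stop"                # polite stop at the corner
```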

\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//wallcorner.jpg}
        \caption{Corner case. One agent hears that another agent is approaching from the other side, and he stops (first and second pictures). When the agent is farther from the wall, the turn behavior is triggered (last picture), so the agent does not stop but turns right to avoid a collision.}
    \label{}
\end{figure}

\subsubsection{Door Case}
When a human is about to enter a door and someone closer to the door is also walking toward it, the human will stop and make way for the latter until the latter has walked through the door. We use a similar strategy to avoid collisions for virtual agents. A collision box is placed at the position of each door, and an agent will avoid entering this box if another agent from a different side is inside it, standing by until the box is empty again. In this way we avoid agents walking face to face, as happens in many systems.
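The door strategy above amounts to a small occupancy check. The class below is a sketch under our own assumptions (the paper does not specify a data structure; side labels are arbitrary strings):

```python
class DoorBox:
    """Collision box at a door: an agent waits while an agent that
    entered from a different side still occupies the box."""

    def __init__(self):
        self.occupants = {}            # agent id -> entry side

    def try_enter(self, agent_id, side):
        """Enter and return True unless an agent from another side is
        inside; agents coming from the same side may share the box."""
        if any(s != side for s in self.occupants.values()):
            return False               # stand by until the box empties
        self.occupants[agent_id] = side
        return True

    def leave(self, agent_id):
        """Free the box when the agent has walked through the door."""
        self.occupants.pop(agent_id, None)
```

A choking point can then be handled by chaining several such boxes, one per door-like segment, as the next case describes.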

\subsubsection{Choking Point Case}
A choking point can be regarded as a series of doors connected together, so the choking point case is solved by applying the door case strategy above serially.

\begin{figure}[htb]
    \centering
        \includegraphics[width=8cm]{images//door-choke.jpg}
        \caption{Door case (left): one agent hears that another agent is coming from the other side of the door and waits in front of the door. Choking point case (right): one agent hears that another agent is walking toward him face to face in the choking hallway. Only if he walks backward can they both reach their goals.}
    \label{}
\end{figure}

\subsubsection{Synthetic Audition-Vision Perception}
Humans integrate perceived auditory and visual information and then produce a synthetic behavior. People pay more attention to an agent or obstacle with sound, so sound sources should be assigned a greater weight. Just as for a real human, different modes of perception should not work independently, but should work together and interact with each other. For example, when an agent hears someone walking behind him, he might turn around to find out who is coming, and the line-of-sight will change its direction accordingly. Then the agent will react to the object: if the object is something scary, he might speed up; if the object is nothing in particular, he might just ignore it; if it is his friend, he might stop and say ``Hi''.
Above we list some basic cases and present the desired behaviors in different situations. We give only one possible behavior in each case, but the output behaviors are open, since people have personally preferred behaviors related to their habits, personal experience, and perhaps personalities.
Synthetic sound-vision perception here means that both sound and vision information are used; for example, sound perception is used to trigger the case and compute the weight, while vision information is used for collision avoidance. This accords with the human perception process: the sound of the target draws our attention, we look at the target, and so we obtain complete information about it.
%\subsection{An Abstract Model: Black Box}
The value of introducing acoustics for steering is that it provides information about an unseen object, especially when the agent is located in a dark environment without light; we can imagine a virtual ``black box'' or blind region that only emits sound to the outside.

\subsection{A Simplified Approach to Sound Steering}
Sound localization is the most costly part of our system, yet it provides less information than line-of-sight, which costs little. So in most applications, such as computer games, it is not necessary to simulate the whole process of sound propagation. In our model, the agent filters the wave frontier (the first-arrived sound packets) to localize. Since the first-arrived packets propagate along the shortest path, Dijkstra's algorithm or the A* algorithm will give that path. According to Huygens' principle, the wave-frontier approach localizes the last turning point on the shortest path as the estimated source position.
In this way, only a pathfinding algorithm is executed, and in general it gives a result similar to the approach in which sound is actually propagated.
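The simplified approach can be sketched on a uniform grid. BFS stands in for Dijkstra's algorithm here (they coincide when every step has equal cost); the grid representation and function names are our own assumptions:

```python
from collections import deque

def shortest_path(walkable, start, goal):
    """BFS shortest path on a 4-connected grid of walkable (x, y) cells
    (equivalent to Dijkstra under uniform step cost)."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                  # reconstruct path start -> goal
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None                           # no path: sound is blocked

def perceived_source(walkable, listener, source):
    """Estimated source position: the first turning point on the shortest
    path from listener to source, i.e., the corner the sound bends around.
    With a straight line of sight, the true source position is returned."""
    path = shortest_path(walkable, listener, source)
    if path is None or len(path) < 3:
        return source
    for a, b, c in zip(path, path[1:], path[2:]):
        d1 = (b[0] - a[0], b[1] - a[1])
        d2 = (c[0] - b[0], c[1] - b[1])
        if d1 != d2:
            return b                      # direction changes: b is the corner
    return source
```

In an L-shaped corridor, for example, the listener localizes the source at the corner cell rather than at its true position, matching the corner-case behavior described earlier.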
