%Different from robotics, sensors, pattern recognition and noise filtering can be skipped in the virtual environment.
Based on the sound propagation model, making our virtual agent react to sound in the environment involves two steps. The first step is sound localization, the "hearing" process of the agent: it detects sound signals in the virtual environment and estimates the source position. In our representation, the position estimate has two parts: direction and distance.
The second step is the decision-making process, which is discussed in Section~\ref{section-auditory-steering}.

Most previous work on navigation is vision-based or geometry-based. For some of it, we may simply replace the exact position of an agent (sound source) with the result of sound localization.
For most of it, however, sound localization is not accurate enough to support vision-based models and algorithms. Although adopting a high-resolution sound map would enable accurate localization, that is not how people navigate: we do not calculate the exact position of a target; instead, we make a general judgement of the collision risk. We therefore introduce a confidence measure for localization together with fuzzy logic, which navigates with inaccurate information while trying to avoid erroneous behavior.


\subsection{Psychology Foundation of Auditory Localization}
Psychology experiments on visual and auditory perception in \cite{springerlink:10.3758/BF03211932} show that the mean error across target azimuths was usually less than $5^\circ$ for both vision and audition, whereas a 7-m change in target distance (from 3 to 10 m) produced a change in mean indicated distance of 5.4 m for vision but only 3.0 m for audition. These experiments demonstrate that perceived egocentric distance exhibits more error in auditory perception than in visual perception.

\begin{figure}[htb]
    \centering
        \includegraphics[width=5cm]{images//localization.jpg}\\
        \caption{An illustration of sound localization. The green mark is the sound source, the red one is the agent (receiver), and the blue one is the estimated position of the sound source (the output of our algorithm). The "momentum" vector and the estimated distance $d$ are shown, and the black rectangle marks the region in which packets are collected. Note that there are many packets in every grid cell.}
    \label{localization}
\end{figure}

\subsection{Direction and Distance Estimate}
\subsubsection{Algorithm}

Our algorithm for sound direction detection traces the flow of sound-wave energy; its output is a sound field gradient that reveals the position of the source.
%We define the center of the sound energy as the average position of sound wave energy using energy value as weight, just as calculating the center of an object using their mass or gravity as weight.In other word,
Inspired by the concept of wave-particle duality in quantum mechanics, we make an analogy here: if we regard the sound packets as virtual particles with mass (energy), the source keeps producing "particles", and we can calculate the "momentum" vector of the region (shown in Figure~\ref{localization}) as follows:

\begin{equation}
  \overrightarrow{M} = \sum_{\text{all packets}} \overrightarrow{V_i}\, E_i
\end{equation}

\begin{equation}
    \mathit{confidence} = \min\left( \frac{|\overrightarrow{M}|}{M(d)}, 1 \right)
\end{equation}

%TODO: mark the images with the corresponding symbols.

$\overrightarrow{V_i}$ and $E_i$ are the velocity and energy of a sound packet respectively, and $\overrightarrow{M}$ is the "momentum" vector of the region. $M(d)$ is the length of the "momentum" vector recorded at distance $d$ under the condition that there is no obstacle between the receiver and the sampling point (as shown in Section~\ref{section-confidence-of-sound-localization} and Figure~\ref{confidence}), and $d$ is the estimated distance.
The formula sums over all packets in the neighboring grid cell(s) within a certain period, and the momentum vector indicates the direction of the sound source or sub-source. Although every sound packet's velocity has only four possible directions $\{N,S,W,E\}$, our experiments show that the sound packets in one grid cell are enough to give an acceptable result (i.e., no noticeable error). We call the length of the sampling period of the perceived signal the "time window". When the time window is short, we only collect sound packets propagating along the shortest path from the sound source to the agent. Few echoes are included in a short time window, so a short enough time window is echo-free and we need not deal with reverberation, which draws considerable attention in sound localization for robots. In psychology, %funneling models []and
it has been proposed that human sound localization prefers the direction of the first-arriving or so-called direct sound, which arrives at a given position before the reverberation does \cite{citeulike:6483578}\cite{MKD}; this is similar to our short-window approach. By keeping a short time window, echo filtering and signal processing are skipped.
The distance estimate is obtained by interpolating the real-time received energy against energy values pre-computed for the obstacle-free case; these values are recorded in experiments using the sound propagation model built in the previous section.
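The direction, confidence, and distance computations above can be sketched as follows. This is a minimal illustration under assumed data structures, not the paper's implementation: each packet is a (grid direction, energy) pair, and the function names (`momentum`, `confidence`, `estimate_distance`) and the energy table standing in for the pre-computed obstacle-free measurements are hypothetical.

```python
import math

# Hypothetical sketch of the "momentum" direction estimate. Each packet is a
# (direction, energy) pair, with direction one of the four grid axes N/S/W/E.
DIRS = {"N": (0.0, 1.0), "S": (0.0, -1.0), "W": (-1.0, 0.0), "E": (1.0, 0.0)}

def momentum(packets):
    """Sum of velocity * energy over all packets in the sampling region."""
    mx = my = 0.0
    for direction, energy in packets:
        vx, vy = DIRS[direction]
        mx += vx * energy
        my += vy * energy
    return mx, my

def confidence(m, m_ref):
    """|M| / M(d), clamped to 1; m_ref is the obstacle-free magnitude M(d)."""
    return min(math.hypot(*m) / m_ref, 1.0)

def estimate_distance(received_energy, energy_table):
    """Linear interpolation on a pre-computed (distance, energy) curve,
    recorded with no obstacle between source and receiver.
    energy_table: list of (distance, energy), energy decreasing with distance."""
    for (d0, e0), (d1, e1) in zip(energy_table, energy_table[1:]):
        if e1 <= received_energy <= e0:
            t = (e0 - received_energy) / (e0 - e1)
            return d0 + t * (d1 - d0)
    return energy_table[-1][0]  # weaker than the table: clamp to farthest entry
```

The direction estimate is simply the orientation of the returned momentum vector; the confidence clamp reflects that an unobstructed, well-aligned packet flow cannot be more than fully believable.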
%In the process of approaching the sound source, the obstacle will have less influence and thus the estimate will become more accurate.

\subsubsection{Confidence of Sound Localization}
\label{section-confidence-of-sound-localization}
We have now introduced a method for sound localization, but we also need a judgement of its confidence. In certain conditions the localization is ambiguous, and the localized position alone is insufficient to describe what a human has perceived, so it is necessary to introduce a confidence value for auditory localization. When the auditory information is very fuzzy, which may be caused by multiple reverberations, the localization is less believable, and thus the weight or priority of this sound source should be small when it serves as a reference for the decision-making process in the next section.
Below is an explanation of how our confidence algorithm works:

\begin{figure}
    \centering
        \includegraphics[width=8cm]{images//confidence.jpg}\\
        \caption{An illustration of localization confidence. The red mark and the blue mark are the sound source and the receiver respectively, and the green marks are the major sub-sources, each of which contributes a black vector to the total "momentum" vector (red). From left to right, the sub-sources become more dispersed and the confidence decreases from 1 to 0.7, and then to 0.5.}
    \label{confidence}
\end{figure}

Recall that we introduced a vector called "momentum" to judge the direction of the sound source.
When there are multiple sound sources or sub-sources in the sound map, and the sound packets from these sources arrive at the agent simultaneously, we simply obtain the vector sum of the "momentum" contributed by each source. When the sources or sub-sources share a similar direction, the sum of these momenta is strengthened, so the confidence value is large; when they point in different directions, the sum is weakened. For a single source, a larger confidence value means that the directions of all sub-sources are similar, or that obstacles have little influence.
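This strengthening and weakening can be checked with a toy model (an assumed setup for illustration, not the paper's implementation): if each sub-source contributes a unit "momentum" vector at a given azimuth, the magnitude of the vector sum drops as the directions disperse.

```python
import math

# Assumed toy model: each sub-source contributes a unit "momentum" vector
# at a given azimuth; the total magnitude shrinks as directions disperse.
def summed_magnitude(azimuths_deg):
    x = sum(math.cos(math.radians(a)) for a in azimuths_deg)
    y = sum(math.sin(math.radians(a)) for a in azimuths_deg)
    return math.hypot(x, y)

aligned = summed_magnitude([0, 0, 0])      # sub-sources agree: magnitude 3
spread = summed_magnitude([-60, 0, 60])    # dispersed sub-sources: magnitude 2
```

Dividing such magnitudes by the obstacle-free reference $M(d)$ and clamping at 1 yields exactly the decreasing confidences depicted in Figure~\ref{confidence}.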

\subsection{Velocity Estimate}
The direct way to obtain velocity is to calculate the gradient of position, which is problematic for sound localization. Taking human auditory localization as an example, psychology experiments show that direction estimates are more accurate than distance estimates \cite{springerlink:10.3758/BF03211932}. In other words, humans are more sensitive to azimuth than to distance in auditory localization, and because of this difference in sensitivity, computing a single total velocity would lose the accuracy of the direction estimate. Since their accuracies are so different, it is natural and necessary to treat angular and radial velocity independently; they correspond to the two major factors (shown in the next section) in human perception of collision avoidance.

%Different from vision localization, since sound localization is less accurate in distance than geometry-based method especially when there are obstacles in the map, the noise of tangential speed estimate can be very significant.
In sum, we use a representation of target speed adapted to our sound localization accuracy, obtaining and using the angular speed and the radial speed separately.
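A minimal sketch of this separation, under assumptions not stated in the paper (hypothetical function name; two successive localization results, direction $\theta$ in radians and distance $d$, sampled `dt` seconds apart):

```python
import math

# Hypothetical sketch: estimate angular and radial speed separately from two
# successive localization results (direction theta in radians, distance d),
# sampled dt seconds apart, instead of differentiating one position vector.
def angular_radial_speed(theta0, d0, theta1, d1, dt):
    # wrap the angle difference into [-pi, pi) before differencing
    dtheta = (theta1 - theta0 + math.pi) % (2.0 * math.pi) - math.pi
    return dtheta / dt, (d1 - d0) / dt

omega, v_radial = angular_radial_speed(0.0, 10.0, 0.1, 9.5, 0.5)
```

Keeping the two components separate means the noisy distance estimate cannot corrupt the comparatively accurate bearing-rate estimate.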
%In next section, angle and tangential speed are used in the steering respectively.



