In this paper, we present a method to simulate agents' reactions to sound sources in a virtual environment.
On the one hand, the computer graphics community has recently become increasingly interested in sound simulation, especially sound generation and propagation.
On the other hand, agents in current crowd simulations lack a proper mechanism for reacting to acoustic phenomena in the virtual environment, even though the scene is full of sound signals.
This gap inspires us to take sound into consideration in agent simulation.
In our proposed method, the simulation consists of three components: sound propagation, sound localization, and auditory steering.
Traditional agents rely on vision-based information for steering in the virtual environment.
Our method, in contrast, can also simulate the behavior of virtual ``blind'' agents who use only sound-based information.
Furthermore, we can combine multiple modalities, including vision- and sound-based ones, to construct a synthetic audition--vision steering model.
In this paper, auditory and visual perception is limited to steering, which is the most noticeable behavior of a virtual agent, although localization can also provide an interface for other forms of auditory perception.

In traditional methods, since no sound is actually propagated or simulated, the agent can only perceive sound and receive semantic messages within a given range determined by a distance function.
The sound intensity $p$ perceived by the agent is computed from the source intensity $I$ and the distance $d$ from the sound source to the agent as follows
\cite{BAVA}:

\begin{equation}
  p = \frac{\log(I)}{d^2}
\end{equation}
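This traditional distance-function model can be sketched in a few lines. The function name, the hearing-range cutoff parameter, and the zero-outside-range behavior are illustrative assumptions, not part of the cited model:

```python
import math

def perceived_intensity(source_intensity, distance, hearing_range):
    """Traditional distance-based sound perception: the agent receives
    the semantic message only within a given range, with perceived
    intensity p = log(I) / d^2 (illustrative sketch)."""
    if distance <= 0 or distance > hearing_range:
        return 0.0  # outside the hearing range the agent perceives nothing
    return math.log(source_intensity) / distance ** 2
```

Note that no propagation is computed here: occluders, reflections, and diffraction have no effect, which is precisely the limitation our physically based model addresses.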

In most crowd and agent simulation systems, the perception subsystem consists of a set of individual sensors, usually including a vision component and occasionally a hearing component. These sensors aim to provide as much information as possible to the virtual humanoids, but they are far from fully utilized, because they are not fully physically based and thus do not always reflect the corresponding situations in the real world.
For example, an agent should stop at a wall corner to avoid a collision when he hears another agent approaching from the other side, but vision-based steering methods fail to do so.
Similarly, when someone behind the agent is walking faster, the agent will accelerate or make way, even though in both cases he never ``sees'' the other agent.

In summary, our motivation is to introduce a physically based sound model, together with a psychologically based auditory localization and steering model, into agent and crowd simulation. This makes the agent's steering behaviors as complete and realistic as possible, so that they fully match human navigation behavior.

\begin{figure}[htb]
    \centering
        \includegraphics[width=6cm]{images//framework.jpg}
        \caption{An overview of our system framework. The navigation controller executes the A* algorithm to find the shortest path from the agent to the goal or sub-goal and prevents the agent from colliding with static obstacles. The other components are explained in the following sections. }
    \label{fig:framework}
\end{figure}
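As background for the navigation controller in the framework above, A* path finding on an occupancy grid can be sketched as follows. This is a standard textbook formulation with a Manhattan heuristic, not our exact implementation; the grid encoding (1 = static obstacle) is an assumption for illustration:

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (1 = obstacle).
    Returns the list of cells from start to goal, or None if no path."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # goal unreachable
```

In our pipeline such a planner supplies the global route, while the auditory and visual steering components handle local, reactive adjustments along it.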


The major contributions of this paper are as follows:

\begin{itemize}
  \item We demonstrate a method to simulate a virtual agent's sound localization, and its confidence judgement, based on a sound propagation model, the Transmission Line Matrix (TLM) method;

  \item We propose a rule-based auditory steering method that employs fuzzy logic and a composite controller synthesized from basis cases.
\end{itemize}
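To give a flavor of the fuzzy-logic component in the second contribution, a toy rule base can be sketched as follows. The membership shapes, the single rule pair, and the weighted-average defuzzification are illustrative assumptions, not the rule set used in our method:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_urgency(loudness, confidence):
    """Toy fuzzy rule base for auditory steering (illustrative only):
    IF sound is loud AND localization confidence is high THEN urgency is
    high; IF sound is quiet THEN urgency is low. Defuzzified as a
    weighted average of the rule outputs."""
    loud = tri(loudness, 0.3, 1.0, 1.7)        # membership of "loud"
    quiet = tri(loudness, -0.7, 0.0, 0.7)      # membership of "quiet"
    confident = tri(confidence, 0.3, 1.0, 1.7) # membership of "high confidence"
    high = min(loud, confident)  # fuzzy AND via min
    low = quiet
    if high + low == 0.0:
        return 0.0
    return (high * 1.0 + low * 0.0) / (high + low)
```

Fuzzy rules of this kind let the steering controller degrade gracefully when the localization confidence is low, instead of reacting sharply to unreliable position estimates.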

The rest of this paper is organized as follows:
\begin{itemize}
  \item Sound propagation model. The first step is to build a computational model of sound wave propagation. Most classic acoustics simulation methods are computationally expensive and run offline, which makes them difficult to incorporate into agent simulation. We therefore introduce the TLM method, which was originally designed for electromagnetic wave simulation, is capable of acoustic simulation, and, most significantly, runs in real time.

  \item Sound localization. Based on the TLM model, we introduce a sound localization method and its confidence judgement. This prepares for the perception step, since our focus of auditory perception is steering, whose input is position information.

  \item Auditory steering and perception. With position information, we can make the agent's reaction to sound more realistic. In this section we discuss some basis cases.

  \item Results and discussion. We describe experiments and evaluate the performance of our method.

  \item Conclusion and future work. We conclude and propose directions for future work.
\end{itemize}
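Since the outline above centers on the TLM method, one scatter-and-propagate step of 2D TLM can be sketched as follows. This is the generic textbook formulation of TLM on a rectangular grid, not our implementation; boundary handling is simplified so that pulses leaving the grid are simply dropped:

```python
import numpy as np

def tlm_step(incident):
    """One scatter-and-propagate step of the 2D Transmission Line Matrix
    method. `incident` has shape (4, H, W); incident[d] holds the pulses
    arriving at each node from direction d in (north, east, south, west).
    The node pressure is half the sum of its four incident pulses."""
    n, e, s, w = incident
    total = n + e + s + w
    # Scattering: each outgoing pulse is half the node sum minus the
    # pulse that arrived from that direction (an energy-conserving,
    # orthogonal scattering matrix).
    out_n, out_e, out_s, out_w = (0.5 * total - p for p in (n, e, s, w))
    # Propagation: a pulse leaving northward arrives at the northern
    # neighbour as a pulse incident from the south, and so on.
    new = np.zeros_like(incident)
    new[2, :-1, :] = out_n[1:, :]   # north-going -> incident-from-south above
    new[0, 1:, :] = out_s[:-1, :]   # south-going -> incident-from-north below
    new[3, :, 1:] = out_e[:, :-1]   # east-going  -> incident-from-west right
    new[1, :, :-1] = out_w[:, 1:]   # west-going  -> incident-from-east left
    return new
```

Iterating this step spreads an injected impulse outward as a wavefront on the grid, which is what makes occlusion-aware sound localization possible for the agents.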
