In this paper, we have discussed the whole process of simulating virtual agents with sound and hearing: sound modeling, localization, and perception. We have applied audition to steering and demonstrated several cases, showing that agents with sound and hearing behave more completely and realistically.

In our work, however, auditory perception is limited to steering and collision avoidance; speech and communication are not addressed, for example generating human-like conversation between agents\cite{SPMIC} or even between an agent and a human. In our model, the sound packet contains only an energy value, but it could convey richer information such as a semantic message, a recorded audio segment, or even computer-generated sound. There are two possible approaches. The first is to embed speech signals in the sound packets propagating through the virtual world; the agent then processes the perceived signals using pattern recognition and natural speech processing techniques. This approach also provides a speech-based human-computer interface, making it possible for a human to communicate directly with a virtual agent. The second approach is to embed only semantic information in the packets and skip the signal processing stage, which makes the simulation more efficient.
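The two approaches above can be contrasted with a minimal sketch of an extended sound packet. All names here (\texttt{SoundPacket}, \texttt{semantic}, \texttt{waveform}, \texttt{perceive}) are hypothetical illustrations, not identifiers from our implementation; the only field taken from our model is the energy value.

```python
from dataclasses import dataclass
from typing import Optional, List, Tuple

# Hypothetical sketch: a sound packet extended beyond the energy-only
# packet of our model. Field names are illustrative assumptions.
@dataclass
class SoundPacket:
    energy: float                          # sound energy, as in our model
    position: Tuple[float, float, float]   # source position in the virtual world
    semantic: Optional[str] = None         # approach 2: symbolic message, no DSP
    waveform: Optional[List[float]] = None # approach 1: speech samples to recognize

def perceive(packet: SoundPacket) -> str:
    """Choose a processing path based on the packet's payload."""
    if packet.semantic is not None:
        # Semantic path: interpret the symbolic message directly,
        # skipping signal processing entirely (more efficient).
        return "understood: " + packet.semantic
    if packet.waveform is not None:
        # Signal path: speech recognition would run on the samples here.
        return "recognize speech in waveform"
    # Energy-only packet: used for steering and collision avoidance.
    return "steer relative to sound source"

# Usage: a semantic packet is interpreted without any signal processing.
p = SoundPacket(energy=0.8, position=(3.0, 0.0, 1.5), semantic="help")
print(perceive(p))  # → understood: help
```

The design choice mirrors the trade-off discussed above: carrying raw waveforms enables speech-based interaction with real humans at the cost of running recognition, while carrying semantics keeps agent-to-agent simulation cheap.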

Apart from the rule-based method utilized in this paper, sound localization can also be integrated with the social force model and with hybrid approaches such as HiDAC \cite{Pelechano}.
