\section{Overview}
\label{sec:Overview}
In general, we separate the illumination on volumes into local lighting and global lighting, where the global part accounts for volumetric shadowing and scattering effects. Based on this separation, our technique consists of two stages:
\begin{enumerate}
\item \textbf{Radiance Estimation} - In a pre-processing stage, we estimate the radiance contributed by global lighting at each voxel. To keep the method general, we estimate the radiance under a set of lighting environments that form an orthogonal basis of a lighting space. In particular, we use spherical harmonics as the basis functions, which are well suited to approximating low-frequency lighting environments. For each lighting environment corresponding to a spherical harmonic basis function, we evaluate the radiance at the center of each voxel using a volume photon map. The results are encoded and stored on disk.
\item \textbf{Realtime Volume Rendering} - In the interactive rendering stage, we combine local and global lighting at each sample point during volume ray casting. The global illumination effects are recovered by expanding the dynamic lighting environment in spherical harmonics and linearly combining the pre-computed radiance associated with each lighting basis. This process is far more efficient than evaluating the radiance directly, e.g., with Monte Carlo ray tracing; interactivity is therefore achievable, and the only additional cost is the space needed to store the incident radiance.
\end{enumerate}
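The two stages above can be summarized by a short derivation (a sketch in our notation, which is introduced here for illustration: $y_i$ denotes the $i$-th spherical harmonic basis function, $L_i(x)$ the pre-computed radiance at voxel center $x$ under the basis lighting $y_i$, $L_{\mathrm{env}}$ the dynamic lighting environment, and $n$ the number of basis functions retained):

```latex
% Runtime projection of the dynamic lighting environment onto the SH basis:
\begin{equation}
  c_i = \int_{S^2} L_{\mathrm{env}}(\omega)\, y_i(\omega)\, \mathrm{d}\omega,
  \qquad i = 1, \dots, n.
\end{equation}
% Recovery of the global radiance as a linear combination of the
% pre-computed per-basis radiance values:
\begin{equation}
  L_{\mathrm{global}}(x) \approx \sum_{i=1}^{n} c_i\, L_i(x).
\end{equation}
```

At runtime, the projection coefficients $c_i$ are computed once per frame, so recovering the global radiance at each sample point reduces to a single $n$-term dot product against the stored per-voxel values.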
The main challenge of this work is to evaluate, encode, and recover the radiance at voxels as accurately as possible with limited additional storage and runtime cost.