\section{Related Works}
\label{sec:RelatedWorks}
%Researchers have proposed various global illumination algorithms, such as radiosity, ray tracing, and photon mapping, for adding more realistic lighting to 3D scenes \cite{PDutre2006a}. Our approach uses photon mapping \cite{Jensen1996a} for realistic volumetric rendering, as photon mapping is more flexible and general and can handle almost all rendering effects. We also integrate Precomputed Radiance Transfer (PRT) into photo mapping to enable real-time volumetric rendering.

This section reviews closely related work on Precomputed Radiance Transfer (PRT) and volumetric global illumination. For a detailed survey of other global illumination algorithms, interested readers may refer to the book by Dutr{\'e} et al.~\cite{Dutre:2006:AGI}.

%Tailored precomputed radiance transfer is incorporated into photo mapping to enable realtime rendering.

%Our work is closely related to previous work on volumetric global illumination and Precomputed Radiance Transfer (PRT) for realtime, realistic rendering.
% \paragraph{Photo Mapping}
% Photo Mapping is a two-pass rendering method introduced by Herik Jensen \cite{Jensen1996a} for global illumination.

\textbf{Precomputed Radiance Transfer}
PRT was first proposed by Sloan et al.~\cite{Sloan:2002:PRT} and has since been adapted to simulate various global illumination effects under dynamic lighting, covering static scenes, dynamic scenes, distant illumination, and local lights~\cite{Ramamoorthi:2009:PBR}.
As a preprocess, PRT produces a set of transport vectors that capture various illumination effects in a low-order spherical harmonics (SH) lighting basis. The precomputed transport vectors can then be used to relight images in a global illumination style in real time by computing the inner product of a light vector with each transport vector.
The original PRT and its variants were initially limited to low-frequency lighting environments~\cite{Sloan:2002:PRT, Lehtinen:2003:MRT, Sloan:2003:CPCA, Sloan:2005:LDPRT}.
%Principal component analysis (PCA) \cite{JLehtinen2003a} or conditional PCA \cite{PSloan2003a} were used to compress the SH basis set (transport vectors) by exploiting the coherence among vertices.
%Sloan et al. \cite{PSloan2005a} employed zonal harmonics (ZH), a set of rotation-invariant harmonic functions, rather than SH to yield a more compact representation for low-frequency signals.
To handle all-frequency environments, researchers have extended PRT with sophisticated representation and compression techniques, such as BRDF factorization~\cite{Liu:2004:AFPRT,Wang:2004:AFR}, spherical radial basis functions~\cite{Tsai:2006:AFPRT}, and non-linear wavelet approximation~\cite{Ng:2003:AFSW,Ng:2004:TPW,Wang:2006:AFRGO}. Green et al.~\cite{Green:2006:VDPLT} presented a hybrid PRT algorithm in which view-independent effects are approximated with the SH or wavelet basis and view-dependent effects are modeled with Gaussian functions. Ramamoorthi~\cite{Ramamoorthi:2009:PBR} provided a comprehensive survey of recent developments in PRT.
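The relighting step described above can be summarized compactly. As a sketch of the standard diffuse PRT formulation following Sloan et al.~\cite{Sloan:2002:PRT} (with hypothetical notation: $\mathbf{l}$ for the SH light vector, $\mathbf{t}_p$ for the precomputed transport vector at point $p$), the exit radiance reduces to a single inner product:

```latex
% Diffuse PRT relighting: exit radiance at point p is the inner
% product of the SH light vector and the precomputed transport vector.
\begin{equation*}
  B_p \;=\; \mathbf{t}_p \cdot \mathbf{l}
       \;=\; \sum_{i=1}^{n^2} t_{p,i}\, l_i ,
  \qquad
  l_i \;=\; \int_{\Omega} L(\omega)\, y_i(\omega)\, d\omega ,
\end{equation*}
% where y_i are the SH basis functions, L is the incident lighting,
% and n is the SH order (n^2 basis coefficients).
```

Because the lighting coefficients $l_i$ can be recomputed per frame while the transport vectors stay fixed, this dot product is what makes real-time relighting possible.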

Although there is a large body of literature on PRT for polygonal models, volumetric models have received far less attention. Sloan et al.~\cite{Sloan:2002:PRT} discussed how PRT can be applied to volumetric data, but at the cost of substantial pre-computation. Ritschel~\cite{Ritschel:2007:FGVC} accelerated the PRT pre-computation for volumetric data with a hierarchical visibility approximation implemented entirely on the GPU. Moon et al.~\cite{Moon:2008:EMSH} described a volumetric rendering method for hair that supports multiple scattering effects based on volumetric grids of spherical harmonics. Zhou et al.~\cite{Zhou:2008:RTSR} presented a GPU-accelerated approach for rendering volumetric smoke under dynamic low-frequency environment lighting using a set of radial basis functions (RBFs). These PRT methods usually require extensive pre-computation for reasonably large volume data, especially when visibility changes.
%Kronander et al. \cite{JKronander2011a} adapted PRT techniques to reduce pre-computation time and allow for interactive updates of visibility of volumetric data. In contrast to these methods, our approach focuses on adapting PRT techniques to photon mapping which can support almost all general rendering effects such as caustics and translucent effects.

%\textbf{Illumination Models in Volume Rendering}
%Researchers have started to adopt various illumination models in direct volume rendering  for capturing realistic lighting effects. Several illumination models, which includes a model for %multiple scattering, have been evaluated by Max in his seminal work~\cite{NMax1995a}. He derived the differential or integral equations for light propagation in different models, and %presented offline calculation methods for solving them. Kniss et al. \cite{JKniss2003a} introduced an interactive shading model to approximate various illumination effects such as volumetric %shadows, forward scattering, and chromatic attenuation in texture based volume rendering. Desgranges et al. extended this method to create dilated shadows in the light direction %\cite{PDesgranges2005a}. Schott et al. \cite{MSchott2009a} further improved the method of Kniss et al. \cite{JKniss2003a} and proposed an occlusion-based shading approach to render occlusion %effects interactively. Although it allows for interactive specification of multidimensional transfer functions, it requires extensive pre-computation if a user change the viewpoint. %Furthermore, the light and the view directions are required to coincide in the method. {\v S}olt{\' e}szov{\'a} et al. \cite{VSolteszova2010a} extended this method by removing the %coincidence constraint and proposed a multidirectional occlusion model using a tilted cone-shaped function to estimate light transport.
%All these methods are strictly constrained to a single light source and rendering based on texture slicing.

%Researchers have also developed various ambient occlusion techniques for enhancing depth perception of structures and better revealing their spatial relations in real-time direct volume %rendering \cite{TRopinski2008a,FHernell2010a}. Although ambient occlusion is regarded as a global illumination method, it is a very crude approximation to full global illumination in %contrast to other global methods \cite{FQiu2007a,TRopinski2010a}.
%Hernell et al. \cite{FHernell2008a} proposed a technique  for efficient global illumination in volume rendering by combining local shadows and global shadows. Qiu et al. \cite{FQiu2007a} %introduced a volumetric global illumination model based on the ace-Centered Cubic (FCC) lattice. Rezk-Salama \cite{CSalama2007a} described a GPU-based Monte-Carlo volume rendering technique %to capture various lighting effects including scattering, environment mapping, and ambient occlusion. An interactive volumetric lighting model was introduced by Ropinski et al. %\cite{TRopinski2010a} to simulate illumination scattering and shadowing effects to enhance depth perception in volume rendering. These approaches can render volumes at reasonably good frame %ratea, but it is still difficult for them to achieve real-time performance in dynamic lighting environments.

%PRT techniques have been applied to direct volume rendering for capturing real-time global illumination effects \cite{FHernell2008a, JKronander2011a}. Lindemann and Ropinski %\cite{FLindemann2010a} presented a modified SH projection technique for complex material properties to support reflectance and scattering illumination effects in real-time volume rendering. %Their approach allows users to edit material properties and change illumination conditions interactively, but it does not enable interactive visibility updates. To address this problem,  %Kronander et al. \cite{JKronander2011a} presented a global illumination model for direct volume rendering using PRT to enable real-time illumination and visibility updates in dynamic %lighting environments.

In contrast to existing illumination models for direct volume rendering, our approach takes the novel route of integrating PRT techniques into photon mapping for rendering volumes. As a result, our approach can simulate almost all global illumination effects while ensuring real-time performance.
%However, these techniques have not been widely used because of the substantial computation and pre-computation required. In practice, most volume rendering systems  use only local illumination models such as the Phone shading model and Blinn-Phong model to achieve interactive performance.


