% This is "sig-alternate.tex" V1.3 OCTOBER 2002
% This file should be compiled with V1.6 of "sig-alternate.cls" OCTOBER 2002
%
% This example file demonstrates the use of the 'sig-alternate.cls'
% V1.6 LaTeX2e document class file. It is for those submitting
% articles to ACM Conference Proceedings WHO DO NOT WISH TO
% STRICTLY ADHERE TO THE SIGS (PUBS-BOARD-ENDORSED) STYLE.
% The 'sig-alternate.cls' file will produce a similar-looking,
% albeit, 'tighter' paper resulting in, invariably, fewer pages.
%
% ----------------------------------------------------------------------------------------------------------------
% This .tex file (and associated .cls V1.6) produces:
%       1) The Permission Statement
%       2) The Conference (location) Info information
%       3) The Copyright Line with ACM data
%       4) NO page numbers
%
% as against the acm_proc_article-sp.cls file which
% DOES NOT produce 1) thru' 3) above.
%
% Using 'sig-alternate.cls' you have control, however, from within
% the source .tex file, over both the CopyrightYear
% (defaulted to 2002) and the ACM Copyright Data
% (defaulted to X-XXXXX-XX-X/XX/XX).
% e.g.
% \CopyrightYear{2003} will cause 2002 to appear in the copyright line.
% \crdata{0-12345-67-8/90/12} will cause 0-12345-67-8/90/12 to appear in the copyright line.
%
% ---------------------------------------------------------------------------------------------------------------
% This .tex source is an example which *does* use
% the .bib file (from which the .bbl file % is produced).
% REMEMBER HOWEVER: After having produced the .bbl file,
% and prior to final submission, you *NEED* to 'insert'
% your .bbl file into your source .tex file so as to provide
% ONE 'self-contained' source file.
%
% ================= IF YOU HAVE QUESTIONS =======================
% Questions regarding the SIGS styles, SIGS policies and
% procedures, Conferences etc. should be sent to
% Adrienne Griscti (griscti@acm.org)
%
% Technical questions _only_ to
% Gerald Murray (murray@acm.org)
% ===============================================================
%
% For tracking purposes - this is V1.3 - OCTOBER 2002

\documentclass{sig-alternate}

\begin{document}
%
% --- Author Metadata here ---
\conferenceinfo{WOODSTOCK}{'97 El Paso, Texas USA}
%\CopyrightYear{2001} % Allows default copyright year (2000) to be over-ridden - IF NEED BE.
%\crdata{0-12345-67-8/90/01}  % Allows default copyright data (0-89791-88-6/97/05) to be over-ridden - IF NEED BE.
% --- End of Author Metadata ---

%\title{Interactive Indirect Illumination}
\title{Approximating Second-order Illumination in Real-time}
%\subtitle{[TODO]}
%\titlenote{A full version of this paper is available as
%\textit{Author's Guide to Preparing ACM SIG Proceedings Using
%\LaTeX$2_\epsilon$\ and BibTeX} at
%\texttt{www.acm.org/eaddress.htm}}}
%
% You need the command \numberofauthors to handle the "boxing"
% and alignment of the authors under the title, and to add
% a section for authors number 4 through n.
%
% Up to the first three authors are aligned under the title;
% use the \alignauthor commands below to handle those names
% and affiliations. Add names, affiliations, addresses for
% additional authors as the argument to \additionalauthors;
% these will be set for you without further effort on your
% part as the last section in the body of your article BEFORE
% References or any Appendices.

\numberofauthors{2}
%
% You can go ahead and credit authors number 4+ here;
% their names will appear in a section called
% "Additional Authors" just before the Appendices
% (if there are any) or Bibliography (if there
% aren't)

% Put no more than the first THREE authors in the \author command
\author{
%
% The command \alignauthor (no curly braces needed) should
% precede each author name, affiliation/snail-mail address and
% e-mail address. Additionally, tag each line of
% affiliation/address with \affaddr, and tag the
%% e-mail address with \email.
\alignauthor Robert Cochran\\
       \affaddr{Department of Computer Science}\\
       \affaddr{Clemson University}\\
       \affaddr{Clemson, SC 29634}\\
       \email{rcochra@cs.clemson.edu}
\alignauthor Jay Steele\\
       \affaddr{Department of Computer Science}\\
       \affaddr{Clemson University}\\
       \affaddr{Clemson, SC 29634}\\
       \email{jesteel@cs.clemson.edu}
}

%\additionalauthors{Additional authors: John Smith (The Th{\o}rv\"{a}ld Group,
%email: {\texttt{jsmith@affiliation.org}}) and Julius P.~Kumquat
%(The Kumquat Consortium, email: {\texttt{jpkumquat@consortium.net}}).}
\date{TODO}
\maketitle
\begin{abstract}
TODO.
\end{abstract}

% A category with the (minimum) three required fields
\category{TODO}[TODO]
%A category including the fourth, optional field follows...
%\category{D.2.8}{Software Engineering}{Metrics}[complexity measures, performance measures]

\terms{TODO}

\keywords{TODO}

\section{Introduction}
Approximating global illumination is key to adding realism to computer-generated images.  The two main components of global illumination are direct illumination, which comes directly from light sources, and indirect illumination, which results from light reflecting off surfaces.  Images that contain indirect illumination are much more realistic than those lacking it, but indirect illumination is computationally expensive.  Non-interactive global illumination algorithms exist that generate very realistic results, but their rendering time per frame is measured in minutes or hours.  Interactive applications, which require per-frame rendering times measured in milliseconds, can easily compute direct illumination, but these applications have historically relied on precomputed light maps or other static techniques for approximating indirect illumination.

In this paper we present an algorithm for rendering dynamic scenes at interactive frame rates with both direct and indirect illumination.  Our algorithm does not rely on any precomputation and supports fully dynamic scenes.  All light properties (position, shape, color) can change per frame, and the camera is free to move throughout the scene.  Specifically, we approximate second-order diffuse reflections (indirect illumination) and the occlusion of these reflections.  Our work is an extension of \cite{keller:ir}.  To approximate diffuse reflections, our algorithm adds point lights to the scene each frame.  Each of these new point lights, which we call indirect lights, approximates indirect illumination due to diffuse reflections.  Positioning these indirect lights well is key to generating quality images.  In our discussion, we will assume one spotlight per scene, although our algorithm easily extends to multiple spotlights.  Our algorithm works in image space; therefore, the processing time per frame is largely independent of scene complexity.  First, we view the scene from the spotlight.  Like \cite{dachsbacher:rsm}, we store additional information from this pass in multiple textures.  Using a low-discrepancy sampling technique, we sample from these textures to determine our indirect light positions, directions, and colors.  Since calculating the illumination due to each indirect light is still computationally expensive, we provide a novel technique for reducing the average amount of work per indirect light while maintaining quality results.  Our algorithm also introduces a new technique for approximating occlusion of second-order diffuse reflections through a novel use of negative point lights.  Finally, we accumulate the illumination due to all indirect lights and negative lights and add it to the direct illumination to generate a complete frame.

In the next section, we provide an overview of global illumination and previous techniques, both interactive and non-interactive.  We then describe our algorithm in detail, followed by implementation details, results, and comparisons.  Finally, we discuss current issues and future work.

\section{Previous Work}
Global illumination algorithms attempt to approximate the rendering equation, which describes how light flows in a scene (assuming a vacuum, free of participating media):
\[
L_o(x, \vec{w}) = L_e(x,\vec{w})+\int_{\Omega}f_r(x,\vec{w}',\vec{w})L_i(x,\vec{w}')(\vec{w}'\cdot\vec{n})d\vec{w}'
\]

$L_o$ represents the outgoing light at a point $x$ in direction $\vec{w}$.  It is the sum of any light emitted from the point in that direction, $L_e$, and any light reflected from the point in that direction.  This last quantity is the integral, over all incoming directions $\vec{w}'$, of the incoming light $L_i$ multiplied by the BRDF, $f_r$, and the cosine of the incoming angle, $\vec{w}'\cdot\vec{n}$.  The reflected light is a combination of direct light and indirect light, i.e., light that has bounced off other surfaces.
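
For the diffuse (Lambertian) surfaces this paper focuses on, the BRDF reduces to a constant that can be pulled outside the integral.  With $\rho_d(x)$ denoting the diffuse albedo at $x$,
\[
f_r(x,\vec{w}',\vec{w}) = \frac{\rho_d(x)}{\pi},
\qquad
L_o(x, \vec{w}) = L_e(x,\vec{w})+\frac{\rho_d(x)}{\pi}\int_{\Omega}L_i(x,\vec{w}')(\vec{w}'\cdot\vec{n})d\vec{w}'
\]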

Many non-interactive rendering techniques exist for approximating global illumination.  These techniques range from radiosity \cite{TODO}, which is concerned with diffuse reflections, to photon mapping \cite{TODO}, which handles both specular and diffuse reflections.  These non-interactive techniques produce very realistic images, but because of the complexity of global illumination, each rendered frame can take minutes to hours to compute.

As GPU power increases, more and more algorithms become available for approximating global illumination in interactive applications.  These methods attempt to balance speed and image quality, and each comes with its own set of restrictions, such as static lights or static objects.  The classical approach is to precompute light maps for static lights, which are then textured onto static objects; obviously, this greatly restricts the application.  Other algorithms, such as \cite{UNC}, converge to a solution over time.

In \cite{keller:ir}, the author introduced the idea of simulating indirect lighting with graphics hardware by placing new point lights, which we call indirect lights, in the scene.  The locations of these indirect lights are determined by shooting particles from the light into the scene; a new indirect light is positioned at each intersection of a particle with the scene.  Multiple bounces, both diffuse and specular, can be simulated.  The scene is rendered, with shadows, once for each indirect light, and the resulting images are accumulated to produce a final image with both direct and indirect illumination.

Recently, new techniques based on \cite{keller:ir} have been developed with the goal of achieving truly interactive frame rates while allowing for dynamic scenes.  In \cite{interleavedsampling}, the authors introduce a novel technique that reduces the workload of each indirect light while maintaining reasonable image quality; however, they rely on the precomputation of a kd-tree for calculating the intersection of particles and the scene.  \cite{dachsbacher:rsm} introduced the idea of sampling from the light's image space, thus avoiding any precomputation for intersection calculation.  This was extended in \cite{dachsbacher:splatting} to allow fast local indirect illumination, including caustics, by splatting indirect lights.  Splatting increases performance when only local indirect illumination is considered, but performance degrades for scenes with global indirect illumination. (TODO ummm...better explanation?  should we diss other?)

TODO ambient occlusion? etc?
TODO do we need to talk about us one more time?

\section{Algorithm}

Our work is based on \cite{keller:ir}.  Similarly, our algorithm approximates indirect illumination by positioning point lights, which we call indirect lights, throughout the scene.  We are only concerned with second-order diffuse reflections, since these tend to contribute most to the final image quality \cite{tabellion:shrek}.  With this approach, the quality and the frame rate of an interactive scene are heavily dependent on the number of indirect lights created each frame.  Our algorithm achieves interactive frame rates for complex scenes with many indirect lights per frame.

We differ from previous approaches in three main ways.  First, we position our lights using an image space, low-discrepancy sampling approach.  Second, we decrease the average work per pixel through the intelligent use of a low-resolution approximation while maintaining high image quality.  Third, we approximate local second-order diffuse reflection occlusion through a novel use of negative lights.  Because we work in image space, our per-frame processing time is largely independent of scene complexity.  In discussing our algorithm, we will assume one spotlight exists in the scene, but extending our algorithm to multiple spotlights is straightforward.

\subsection{Positioning Indirect Lights}

Our algorithm automatically places indirect lights into the scene each frame.  Avoiding light flicker, which occurs when indirect lights are drastically altered from frame to frame, is a very hard problem in a truly dynamic scene; however, our algorithm avoids flicker due to temporal or camera changes.  Previous approaches have determined the locations of indirect lights by casting rays from the main light into the scene; the intersection of each ray with the scene designates a new indirect light position.  For interactive applications, precomputed data structures, such as kd-trees, have been used \cite{TODO} to achieve fast ray/scene intersection calculations.  However, relying on such preprocessing severely limits scene interactivity and requires specific ray/geometry intersection routines.

%\subsubsection{Rendering from the Light}

Instead of explicitly checking for ray/scene intersections, we simplify this step by rendering the unlit scene from the spotlight's view into multiple textures.  Similar to \cite{dachsbacher:rsm, dachsbacher:splatting}, we exploit multiple render targets to store each pixel's world space position, normal, material information, and depth value in multiple textures, which we call light textures.  Each pixel in the position light texture represents a potential indirect light location; conversely, every possible indirect light location is available in the position light texture.  This technique integrates well with shadow maps, which is why we also store depth information in this pass.
 
We now perform a low-discrepancy sampling from the light textures to determine the properties (position, direction, and diffuse color) of our indirect lights.  Similar to \cite{keller:ir}, our samples are based on the Halton sequence, which provides a uniformly distributed, quasi-random, low-discrepancy sampling pattern.  The quasi-random aspect allows our algorithm to avoid light flicker due to temporal or camera changes; the indirect light positions vary frame to frame only due to positional changes of scene objects or changes of the spotlight.  First, we generate points in $\Re^3$ uniformly on the unit sphere surrounding the scene's spotlight according to
\begin{eqnarray*}
\varphi &=& 2\pi\Phi(2, i)\\
\delta &=& \arcsin(2\Phi(3,i) - 1)\\
s_i &=& (\sin(\varphi)\cos(\delta), \cos(\varphi)\cos(\delta), -\sin(\delta))
\end{eqnarray*}

where $\Phi(j, i)$ is the $i$-th Halton point for the prime base $j$.  We select the first $N$ sample points that fall within the cone of the spotlight, rejecting all others.  The accepted sample points are projected into the image space of the spotlight.  The projected sample points, which are in $\Re^2$, provide indices into our light textures; these $N$ sample points allow us to determine the position, direction, and color of each of our $N$ indirect lights. TODO figure from light?
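
The sampling step above can be sketched as follows.  This is an illustrative CPU implementation under the equations just given, not our GPU code, and the function names are ours:

```python
import math

def halton(i, base):
    """Radical-inverse value Phi(base, i) of index i in the given prime base."""
    result, f = 0.0, 1.0 / base
    while i > 0:
        result += f * (i % base)
        i //= base
        f /= base
    return result

def sphere_sample(i):
    """Quasi-random point s_i on the unit sphere, per the equations above."""
    phi = 2.0 * math.pi * halton(i, 2)
    delta = math.asin(2.0 * halton(i, 3) - 1.0)
    return (math.sin(phi) * math.cos(delta),
            math.cos(phi) * math.cos(delta),
            -math.sin(delta))

def first_n_in_cone(n, spot_dir, cos_cutoff, start=1):
    """Rejection step: keep the first n samples inside the spotlight cone,
    tested against the cosine of the cone's half-angle."""
    samples, i = [], start
    while len(samples) < n:
        s = sphere_sample(i)
        if sum(a * b for a, b in zip(s, spot_dir)) >= cos_cutoff:
            samples.append(s)
        i += 1
    return samples
```

In the full algorithm, each accepted sample is then projected into the spotlight's image space to index the light textures.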


\subsection{Computing Indirect Illumination}

Computing the illumination due to the indirect lights requires treating each of the $N$ indirect lights as a point light.  In general, for each fragment, we must compute the direct illumination and the indirect illumination, which leads to $N+1$ lighting calculations per fragment.  For large $N$, computing the illumination for all indirect lights cannot be accomplished in one pass on current hardware.  We exploit a deferred shading pass \cite{TODO:ds} to avoid repeating calculations across multiple passes and to avoid the expense of illuminating hidden fragments.  We render from the camera without lighting into multiple textures; for each pixel, we store its world space position, normal, and material information.  Deferred shading, when combined with our image space light positioning algorithm, allows our algorithm to be largely independent of scene complexity.  The actual scene is rendered only twice: once from the point of view of the spotlight and once from the camera view.

Computing the indirect illumination for many indirect lights per pixel using deferred shading is still too expensive when combined with our image space occlusion (TODO see below?).  However, a cheaper evaluation can be sufficient for calculating the indirect illumination of much of each frame \cite{dachsbacher:rsm}.  Using the technique that follows, the average amount of work per indirect light is reduced while maintaining image quality.

First, we downsample the deferred shading textures to a lower resolution.  We then compute a low-resolution indirect illumination texture using these low-resolution deferred shading textures and all $N$ indirect lights.  We upsample this low-resolution indirect illumination texture to form the basis of our final indirect illumination texture.  For each pixel in our final image, we determine whether the corresponding low-resolution indirect illumination value is a sufficient approximation: each pixel's normal and world space position is compared with the normal and world space position used to calculate the low-resolution value.  For all pixels in which this difference is above a threshold, we calculate the indirect illumination from the high-resolution deferred shading textures and all $N$ indirect lights.
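
The per-pixel acceptance test can be sketched as below.  The threshold values are hypothetical placeholders for illustration; suitable values are scene dependent:

```python
def low_res_ok(normal, pos, lr_normal, lr_pos,
               normal_cos_min=0.95, max_dist=0.1):
    """Return True when the upsampled low-resolution indirect illumination
    value is an acceptable approximation for this pixel: the pixel's normal
    and world space position must closely match those used in the
    low-resolution pass.  Thresholds here are hypothetical."""
    cos_angle = sum(a * b for a, b in zip(normal, lr_normal))
    dist_sq = sum((a - b) ** 2 for a, b in zip(pos, lr_pos))
    return cos_angle >= normal_cos_min and dist_sq <= max_dist ** 2
```

Pixels failing this test are the ones recomputed at full resolution against all $N$ indirect lights.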

This technique is effective provided the indirect illumination for a majority of pixels can be approximated by the low resolution pass.  Obviously, this is scene dependent.  TODO figure for effectiveness

\subsection{Local Occlusion}
We now introduce a novel method of visibility calculation
and describe how it provides an effective means of improving
indirect illumination.
%Crow introduced the shadow volume technique.
%Every object that casts a shadow has an associated shadow volume, 
%defined as the region of space the object occludes from a light.
%Our shadowing method employs light sources 
%which emit negative light, that is, light sources which remove
%light from the scene. 
In our algorithm, we are given a set of surface points $S$ and
a set of indirect lights $L$.
%, and we wish to calculate the
%the total contribution of the lights in $L$ at each point $p \in S$.
%A ray from a light in $L$ should terminate
%upon reaching a surface point $p$. 
%However, to render the scene
To render the scene
interactively, 
our lighting calculation makes the simplifying assumption 
that all indirect lights are visible.    
We consider each surface point independently,
and calculate the total contribution of each light at the points $p \in S$, 
ignoring whether or not light reaching $p$ is actually occluded by other points.
By doing so, we sacrifice realism in scenes which 
contain occluding surfaces. We could calculate a shadow map
for each light in $L$,
but shadow maps require an additional depth render
from the point of view of each light;
because our algorithm uses
a large number of lights to provide second-order illumination,
this would be prohibitively expensive.
%We use shadow mapping to calculate an object's visibility
%from direct light sources. 
%Consequently, we do not include a visibility 
%component in the indirect lighting.
Other interactive indirect light
algorithms disregard visibility of indirect lights for the sake of interactivity.
An ambient occlusion term is used by [TODO] to provide 
information on a fragment's local visibility, but this
requires a pre-computation step and static models, and 
is an approximation of local visibility only.  
We propose a novel method of visibility calculation
which employs light sources
that emit negative light, i.e., light sources that remove
light from the scene. By carefully placing negative lights in the scene,
we improve the realism and accuracy of our indirect illumination,
with only an increase in the size of $L$ and a small amount of extra
computation to determine the negative light positions.

We begin with a description of how negative lights work. 
Suppose we have an ideal point light $l$ 
with position $P_l$ and direction $D_l$, and a set of points $S$.
A light 
ray emanating from $l$ should terminate
upon reaching a surface point $p \in S$.  
In real-time applications,
%which use rasterization, rather than ray tracing,
reproducing this effect generally requires an auxiliary  
data structure, which is used to determine whether there is a clear path from $l$ to $p$.
In our technique, we place negative lights  $l_{neg}$ along the surfaces
described by the points in $S$ which are illuminated by $l$. 
Denoting by $L_p$ the direction vector from each surface point to the
light $l$, each $l_{neg}$ is given
direction $-L_p$, intensity equal to the
negative of $l$'s intensity, and
a spotlight angle $\Theta \leq \epsilon$.
We define the points in the subset of $S$ which have a clear path to $l$ as $S_{light}$,
and the subset of points that do not, as $S_{dark}$.
After summing the contribution of 
all lights in the scene at each point $p \in S$, 
%only points which have a clear path to $l$
all points in $S_{light}$ will be correctly illuminated.
Not every point in $S_{dark}$
will receive negative light; thus, illumination is subtracted from only
a fraction of the points in $S_{dark}$.
However, as the number of negative lights approaches infinity, all points in $S_{dark}$
become receivers of negative light.
For a finite number of negative lights, we can increase
the probability of covering all points in $S_{dark}$ by increasing
the spotlight angle $\Theta$ of each $l_{neg}$, at
the expense of possibly covering points not in $S_{dark}$.
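
To make the subtraction concrete, the following sketch accumulates the diffuse contributions of ordinary and negative spotlights at a surface point; a negative intensity simply removes previously added light.  This illustrates the idea under a deliberately simplified lighting model (no attenuation, scalar intensities), not our shader code, and clamping the total at zero is our choice for this sketch:

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _normalize(v):
    length = math.sqrt(_dot(v, v))
    return tuple(x / length for x in v)

def spot_contribution(light_pos, light_dir, intensity, cos_cutoff,
                      point, normal):
    """Diffuse contribution of one (possibly negative-intensity) spotlight.
    Points outside the cone receive nothing."""
    to_point = _normalize(tuple(p - l for p, l in zip(point, light_pos)))
    if _dot(to_point, light_dir) < cos_cutoff:
        return 0.0
    n_dot_l = max(0.0, _dot(normal, tuple(-c for c in to_point)))
    return intensity * n_dot_l

def shade(point, normal, lights):
    """Sum the contributions of all lights in L, positive and negative,
    clamping at zero so negative lights cannot yield negative radiance
    (our choice for this sketch)."""
    total = sum(spot_contribution(*l, point, normal) for l in lights)
    return max(0.0, total)
```

A point covered by both an indirect light and a matching negative light receives a net contribution of zero, approximating occlusion of that indirect light.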

We adapt and extend the negative light algorithm
to add an approximate visibility component to our indirect illumination.
\iffalse 
We first describe a basic method for 
approximating visibility with negative lights
in a deferred shading environment.  
Given $k$ point lights and position, normal and diffuse buffers ($Pos$, $Norm$, and $Diff$) of size
$m x n$, for each scene fragment,
the illumination value is equal to the
sum of the contributions of the $k$ point lights and $m*n*k$
negative point lights. Each of the $k$ lights has an
associated $m*n$ negative lights. The parameters of the
negative lights are given by scene buffers.   
The position of negative light $L_{ijk}$ is
$Pos(i,j)$, its direction is the vector from the $k$th light's position to $Pos(i,j)$
and its intensity is equal to negative intensity of the $k$th light. 
\fi
- describe a method for determining negative light positions in image space.   
- describe how indirect neg light positions are found and used.

\iffalse 
Our algorithm, like other interactive indirect light algorithms, 
does not calculate shadows for the indirect lights.  
With shadow maps, this would require rendering the scene from the each 
indirect light, which is too expensive for interactive applications with complex geometry.  
We have developed a technique for approximating occlusion from indirect 
lights through a novel application of negative lights, which subtract light from fragments.

Similar to indirect lights, we position 
negative indirect lights throughout the scene.  
First, we create generate negative light textures.  Negative light textures

Math? behind idea
Render negative illumination to texture (average, etc)
For each object that we desire neg. illumination for, project bounding box to image space of camera
Sample from within the projected bounding box, placing neg. pt. lights
Subtract light from indirect illumination texture according to neg. pt. lights
Example images

TODO Robby
\fi
\subsection{Final Gather}

TODO after above section

\section{Implementation}
We implemented our algorithm using OpenGL and GLSL.  Since our algorithm requires rendering to multiple floating point targets, we exploited the following extensions: GL\_ARB\_draw\_buffers, GL\_EXT\_framebuffer\_object, and GL\_NV\_float\_buffer.  

Our algorithm is implemented in multiple passes, with only two passes requiring the scene to be rendered.  The first pass renders the scene from the view of the spotlight.  In this pass, we store each visible fragment's world space position, normal, and material information in three 16-bit float textures, each with four components.  Although 32-bit float textures are available, the additional precision does not provide enough benefit to overcome the extra memory bandwidth requirements, which negatively affect performance.  A depth texture is also created in this pass and is used to generate shadows from the main spotlight.

A deferred shading pass, from the camera, is next.  The world space position, normal, and material properties of each visible fragment are stored in three 16-bit float textures, each with four components.  We also create a set of lower resolution textures by downsampling the full-resolution deferred shading textures.

Next, we compute a discontinuity buffer, which is used to determine the pixels for which a low-resolution indirect illumination approximation is suitable.  For pixels in which the full-resolution deferred shading values (normal and world space position) differ greatly from the low-resolution deferred shading values, a red pixel is stored in the discontinuity buffer; otherwise, a black pixel is written.  The discontinuity buffer is then downsampled to a lower resolution and transferred to the CPU for use in the next pass.  Transferring and analyzing a full-resolution discontinuity buffer adversely affects performance.

Our indirect illumination texture is computed in multiple steps.  First, we compute a low-resolution version from the camera view.  This texture is upsampled to full resolution.  For every red pixel in the discontinuity buffer, we compute the indirect illumination for the corresponding pixels in the full-resolution texture.  We restrict the shader to the correct pixels by drawing quadrilaterals.

TODO negative fun
TODO final accumulation

\section{Results}
Pretty pictures

Frame rates

Comparison

\section{Conclusion}
We rock...not really.

\balancecolumns % GM July 2000
% That's all folks!
\end{document}

