\documentclass[11pt, a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage[cm, empty]{fullpage}
\usepackage{amsmath}
\usepackage{hyperref}
\setlength{\parindent}{1cm}
\usepackage{graphicx}
\usepackage{subfig}
%\usepackage{setspace}
%\onehalfspacing


\begin{document}
%\title{CS-440 ACG - Exercise 2}
%\author{Mihai Moraru, Kostiantyn Pupykin}
%\date{December 7, 2011}
%\maketitle
\begin{center} \huge CS-440 ACG -- Exercise 2\end{center}
\begin{center} Mihai Moraru, Kostiantyn Pupykin\end{center}

\section*{General remarks}
We dedicated approximately 10 hours to exercise 2.

Problems encountered:
\begin{itemize}
	\item forgot that a Ray is constructed from a $\langle \mathit{point}, \mathit{direction}\rangle$ pair and not a $\langle \mathit{point}, \mathit{point}\rangle$ pair
\end{itemize}

\section*{2.2 Sampling the light sources}
We added a few optimizations to the algorithm suggested in the handout:
\begin{itemize}
	\item Compute the partial sum of the light areas only once, in \texttt{setScene()}, in order to avoid duplicating the effort for each sample.
	The values are stored in the member \texttt{vector<double> partialAreaSum}.
	\item As the vector of partial sums is sorted, we realized it would be more efficient to do a binary search in $O(\log N)$ than to step through the whole list in $O(N)$.
\end{itemize}

The number of lights, $N$, is very small in this scene, so the benefit cannot be exploited to its full potential. This gave us the idea for 2.5, where the number of MeshTriangles is much larger.

\section*{2.3 Estimating direct lighting}
We added the method \texttt{Vector4 integrateLightSampling(const Ray\& ray)} in order to implement the importance sampling for this exercise.
Because the PDF is now $p(x) = \frac{1}{\mathit{areaSum}}$, we multiply the value obtained after the inner loop by $\mathit{areaSum}$. This represents the total area of the lights, instead of the $2\pi$ we used for the uniform hemisphere sampling.
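Concretely, the change to the Monte Carlo estimator can be written out as follows (a sketch of the standard derivation, using our $\mathit{areaSum}$ notation; $M$ is the number of samples and $f$ the integrand):
```latex
\[
L \approx \frac{1}{M}\sum_{i=1}^{M} \frac{f(x_i)}{p(x_i)}
  = \frac{\mathit{areaSum}}{M}\sum_{i=1}^{M} f(x_i),
\qquad \text{since } p(x_i) = \frac{1}{\mathit{areaSum}}.
\]
```
With uniform hemisphere sampling we had $p(\omega_i) = \frac{1}{2\pi}$ instead, which is where the $2\pi$ factor came from.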

\section*{2.4 Comparison to directional sampling}
	We rendered the LGG scene using \texttt{nSamples} $= 1000$. The rendering took 188 seconds.
	%With OMP enabled, the same scene (1000 samples) took 85 seconds to render.
	Compared to directional sampling, the visual results are much better (lower variance) and the rendering is faster.

%\begin{figure}[h]
%        \caption{Comparison to directional sampling}
%        \centering
%        \subfloat[integrateSamplingBRDF, nSamples = 1000, 247s] {
%                \includegraphics[type=png,ext=.png,read=.png,width=.45\textwidth]{../exercise_01/result_1.6}
%        }
%        \qquad
%        \subfloat[integrateSamplingLights, nSamples = 1000, 188s] {
%                \includegraphics[type=png,ext=.png,read=.png,width=.45\textwidth]{result_2.4}
%        }
%\end{figure}

\section*{2.5 Speed it up}
Provided we understood it correctly, the method proposed in the handout splits the interval $[0, areaSum)$ into $n$ intervals of equal size and (roughly) assigns each of the N triangles to one interval.
This amounts to having chunks of $\lfloor \frac{N}{n} \rfloor$ triangles each.
For each lookup it \emph{``computes directly''} the chunk corresponding to the value of $r$.
Then it continues with a linear lookup inside that chunk. This amounts to a complexity of $O(n + \frac{N}{n})$.
If we were to continue implementing this method, the best choice would be $n = \sqrt{N}$, which gives a lookup time of $O(\sqrt{N})$.
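The choice $n = \sqrt{N}$ follows from minimizing the lookup cost over $n$:
```latex
\[
g(n) = n + \frac{N}{n}, \qquad
g'(n) = 1 - \frac{N}{n^2} = 0 \;\Rightarrow\; n = \sqrt{N},
\qquad g(\sqrt{N}) = 2\sqrt{N} = O(\sqrt{N}).
\]
```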

However, we realized that we can apply exactly the same algorithm from 2.2 but this time to the whole set of triangles.
We compute a partial sum of the triangle areas and then binary-search for the triangle corresponding to $r$.
This has a complexity of $O(N)$ for the initialization (in \texttt{setScene()}) and $O(\log N)$ for each lookup.

The resulting rendering with \texttt{nSamples} $= 1000$ took 166 seconds (with OpenMP).

%\begin{figure}[h]
%        \caption{}
%        \centering
%        \subfloat[sampleLightsFast, nSamples = 1000, 166 s] {
%                \includegraphics[type=png,ext=.png,read=.png,width=.45\textwidth]{result_2.5}
%        }
%\end{figure}

\section*{Comments}
Trying to find a faster way to do inversion sampling, we found out in \emph{Physically Based Rendering} (2nd edition, section 13.3.1) that the authors use the same method of binary-searching the partial sums.
Their explanations helped us better understand inversion sampling and showed us that we had actually found a way to compute the inverse of the CDF (in the discrete, 1D case).

\end{document}
