% octresults.tex

% Created on: Sep 8, 2008
%      Author: hari



%{{{ results

\section{Results}
\label{sec:results}

In this section we present performance and scalability analyses for the construction, balancing, and meshing of octrees. We also present results demonstrating the effectiveness of mutual information computed using octrees.

\subsection{Octree construction and balancing}
The performance of the proposed algorithms was evaluated through a number of
numerical experiments, including fixed-size and isogranular scalability
analyses. The algorithms were implemented in C++ using the MPI
library. A variant of the sample sort algorithm was used to sort the
points and the octants, which incorporates a parallel bitonic sort to
sort the sample elements, as suggested in \cite{karypis03}. PETSc
\cite{petsc-web-page} was used for profiling the code. All tests were
performed on the Pittsburgh Supercomputing Center's TCS-1 terascale
computing HP AlphaServer Cluster, which comprises 750 SMP ES45 nodes.
Each node is equipped with four Alpha EV-68 processors at 1 GHz and 4
GB of memory. The peak performance is approximately 6 Tflops, and
the peak performance for the top-500 LINPACK benchmark is
approximately 4 Tflops.  The nodes are connected by a Quadrics
interconnect, which delivers over 500 MB/s of message-passing
bandwidth per node and has a bisection bandwidth of 187 GB/s. In our
tests, we have used 4 processors per node wherever possible.

We first present results from an experiment conducted to highlight the
advantage of the proposed two-stage method for intra-processor balancing,
followed by fixed-size and isogranular scalability analysis results.

\subsubsection{Test Data} 
\label{sec:data}
Data of different sizes were generated for three different
spatial distributions of points: Gaussian, Log-normal, and Regular. The Regular
distribution corresponds to a set of points distributed on a Cartesian grid. Datasets of increasing sizes were generated
for all three distributions so that they result in balanced octrees
with octants ranging from $10^6$ (1M) to $10^9$ (1B). All of the experiments were carried
out using the same parameters: $D_{max} = 30$ and $N_{max}^p = 1$. Only the number
 and distribution of points were varied to produce the various
 octrees. The fixed-size scalability analysis was performed by selecting
 the 1M, 32M, and 128M Gaussian point distributions to represent small, medium and large
problems. We provide the input and output sizes for the construction and balancing algorithms 
in Table \ref{tab:numbers}. The output of the construction algorithm is the input for the balancing algorithm. 
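The three point distributions described above can be sketched compactly. The following Python snippet is an illustrative reconstruction, not the code used in the experiments (which was C++/MPI); the means, variances, and seeds are our own arbitrary choices:

```python
import random

def gaussian_points(n, mean=0.5, std=0.15, seed=0):
    """n points clustered around the center of the unit cube; coordinates clamped to [0, 1)."""
    rng = random.Random(seed)
    return [tuple(min(max(rng.gauss(mean, std), 0.0), 1.0 - 1e-9) for _ in range(3))
            for _ in range(n)]

def lognormal_points(n, mu=-1.5, sigma=0.5, seed=0):
    """n points with a skewed, one-sided concentration; coordinates clamped to [0, 1)."""
    rng = random.Random(seed)
    return [tuple(min(max(rng.lognormvariate(mu, sigma), 0.0), 1.0 - 1e-9) for _ in range(3))
            for _ in range(n)]

def regular_points(k):
    """k^3 points on a Cartesian grid inside the unit cube (the 'Regular' case)."""
    h = 1.0 / k
    return [((i + 0.5) * h, (j + 0.5) * h, (l + 0.5) * h)
            for i in range(k) for j in range(k) for l in range(k)]
```

The Regular generator produces an inherently balanced octree by construction, which is why Table \ref{tab:numbers} reports its octant count only once.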

\begin{table*}[p] 
  \begin{center}
  \footnotesize
  \begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|} \hline 
   & \multicolumn{4}{c|}{Gaussian} & \multicolumn{4}{c|}{Log-Normal} & \multicolumn{3}{c|}{Regular} \\ 
   \cline{2-12}
       &        & \multicolumn{2}{|c|}{Balancing} & Max.  &  & \multicolumn{2}{|c|}{Balancing}   &    &   &  &  \\
    \cline{3-4} \cline{7-8}
    Size      & Points &  Leaves  &  Leaves   & Level &  Points &  Leaves & Leaves & $\mathcal{L}^{\ast}$ & Points & Leaves & $\mathcal{L}^{\ast}$ \\
              &        &  before &  after  & ($\mathcal{L}^{\ast}$)  &   & before & after &    &    &    & \\\hline              
    1M        & 180K   & 607K     & 0.99M    &  14   & 180K   &  607K   &  0.99M  &  13    & 0.41M   & 0.99M  & 7 \\
    2M        & 361K   & 1.2M     & 2M       &  15   & 361K   &  1.2M   & 2M      &  14    & 2M      & 2M     & 7 \\
    4M        & 720K   & 2.4M     & 3.9M     &  14   & 720K   &  2.4M   & 3.9M    &  15    & 2.4M    & 4.06M  & 8 \\
    8M        & 1.5M   & 4.9M     & 8.0M     &  16   & 1.5M   &  4.9M   & 8.1M    &  16    & 3.24M   & 7.96M  & 8 \\
    16M       & 2.9M   & 9.7M     & 16M      &  16   & 2.9M   &  9.7M   & 16M     &  16    & 16.8M   & 16.8M  & 8 \\
    32M       & 5.8M   & 19.6M    & 31.9M    &  17   & 5.8M   &  19.6M  & 31.8M   &  17    & 19.3M   & 32.5M  & 9 \\
    64M       & 11.7M  & 39.3M    & 64.4M    &  18   & 11.7M  &  39.3M  & 64.7M   &  17    & 25.9M   & 63.7M  & 9 \\
    128M      & 23.5M  & 79.3M    & 0.13B    &  19   & 23.5M  &  79.4M  & 0.13B   &  19    & 0.13B   & 0.13B  & 9 \\
    256M      & 47M    & 0.16B    & 0.26B    &  19   & 47M    &  0.16B  & 0.26B   &  19    & 0.15B   & 0.26B  & 10 \\
    512M      & 94M    & 0.32B    & 0.52B    &  20   & 94M    &  0.32B  & 0.52B   &  20    & 0.17B   & 0.34B  & 10 \\ 
    1B        & 0.16B  & 0.55B    & 0.91B    &  21   & 0.16B  &  0.55B  & 0.91B   &  20    & 1.07B   & 1.07B  & 10 \\ \hline
  \end{tabular}
\end{center}	
 \caption{Input and output sizes for the construction and balancing algorithms for the scalability experiments on Gaussian, Log-Normal, and Regular point distributions. The output of the construction algorithm is the input for the balancing algorithm. All the octrees were generated using the same parameters, $D_{max} = 30$ and $N_{max}^p = 1$; differences in the number and distribution of the input points result in different octrees for each case. The maximum level of the leaves ($\mathcal{L}^{\ast}$) for each case is listed; note that none of the leaves produced were at the maximum permissible depth ($D_{max}$). The maximum leaf level depends only on the input distribution. Regular point distributions are inherently balanced, and so we report the number of octants only once.\label{tab:numbers}}
 \end{table*}


\subsubsection{Comparison between different strategies for the local balancing stage}
\label{sec:ccr}
In order to assess the advantages of using a two-stage approach for local
balancing over existing methods, we compared the runtimes on different problem
sizes. Since the comparison is between local-balancing strategies,
it involves no communication and was therefore evaluated on a shared-memory machine. We compared our two-stage approach, discussed
in Section \ref{sec:combo}, with two other approaches: the first is the
prioritized ripple propagation idea applied on the
entire local domain \cite{tu05}, and the second approach is to use ripple
propagation in 2 stages, where the local domain is first split into coarser
blocks\footnote{The same partitioning strategy as used in our two-stage
algorithm was used to obtain the coarser blocks.} and ripple propagation is
applied first to each local block and then repeated on the boundaries of all
local blocks. Fixed-size scalability analysis was performed to compare the
three approaches on problem sizes of 1, 4, 8, and 16 million
octants. The results are shown in Figure \ref{fig:smy}. All three approaches
demonstrate good fixed-size scalability, but the proposed two-stage approach has a lower
absolute runtime.
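The constraint that all three strategies enforce is the same: any two leaves whose boxes touch (share a face, edge, or corner) may differ by at most one level. A brute-force checker for small linear octrees makes the condition concrete; this is an illustrative Python sketch (octants are `(x, y, z, level)` tuples in integer coordinates, `D_MAX` is an arbitrary small depth, and leaves are assumed disjoint), not the ripple-propagation algorithm itself:

```python
D_MAX = 5  # hypothetical maximum depth for this sketch

def side(level):
    """Edge length of a level-l octant in integer coordinates."""
    return 1 << (D_MAX - level)

def touch(a, b):
    """True if the two octants' boxes share a face, edge, or corner
    (assumes disjoint leaves, so full 3-D overlap does not occur)."""
    overlap = 0
    for d in range(3):
        lo = max(a[d], b[d])
        hi = min(a[d] + side(a[3]), b[d] + side(b[3]))
        if hi < lo:
            return False        # separated in this dimension
        if hi > lo:
            overlap += 1        # proper (positive-measure) overlap
    return overlap < 3          # touching, not overlapping interiors

def is_balanced(leaves):
    """2:1 balance: touching leaves differ by at most one level. O(n^2)."""
    for i, a in enumerate(leaves):
        for b in leaves[i + 1:]:
            if touch(a, b) and abs(a[3] - b[3]) > 1:
                return False
    return True
```

The production algorithms never do this quadratic pairwise check, of course; the checker is useful only as an executable statement of the invariant the balancing stage must restore.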

\begin{figure}[p]
  \begin{center}
    \subfloat[] { 
    \includegraphics[angle=90, width=0.48\textwidth]{PDF_Figures/smy1}
}
    \subfloat[] { 
 		\includegraphics[angle=90, width=0.48\textwidth]{PDF_Figures/smy4}
 }\\
    \subfloat[] {  
  	\includegraphics[angle=90, width=0.48\textwidth]{PDF_Figures/smy8}
  }
    \subfloat[] {
		\includegraphics[angle=90, width=0.48\textwidth]{PDF_Figures/smy16}
	}
  \end{center}
  \caption{Comparison of three different approaches for balancing linear octrees for Gaussian distributions of {\tt(a)} {\tt1M}, {\tt(b)} {\tt4M}, {\tt(c)} {\tt8M}, and {\tt(d)} {\tt16M} octants.}
  \label{fig:smy}
\end{figure}

\subsubsection{Scalability analysis}
\label{sec:scal}
In this section, we provide experimental evidence of the good
scalability of our algorithms. We present both fixed-size and
isogranular scalability analyses. Fixed-size scalability analysis
computes the speedup obtained when the problem size is kept constant
and the number of processors is increased. Isogranular scalability
analysis tracks the execution time while the problem size and the
number of processors are increased proportionately. By keeping the
problem size per processor (roughly) constant as the number of
processors grows, we can identify communication problems related to
the size and frequency of messages and to global reductions, as well
as problems with algorithmic scalability.
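The two metrics reported in the fixed-size studies are the standard ones; as a minimal sketch (the timing values in the test are made-up numbers, not measurements from this paper):

```python
def speedup(t_1, t_p):
    """Fixed-size speedup: wall-clock time on 1 processor over time on p processors."""
    return t_1 / t_p

def efficiency(t_1, t_p, p):
    """Parallel efficiency: speedup normalized by the processor count (1.0 is ideal)."""
    return speedup(t_1, t_p) / p
```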

One of the important components in our algorithms is the sample sort
routine, which has a complexity of $\mathcal{O}(\frac{N}{n_p}\log
\frac{N}{n_p} + n_p^2\log n_p)$ if the samples are sorted using a
serial sort. This causes problems when $\mathcal{O}(N) < \mathcal{O}(n_p^3)$ as the
serial sort begins to dominate and results in poor scalability. For
example, at $n_p=1024$ we would require $\frac{N}{n_p} > 10^6$ to obtain good
scalability. This is problematic, since arbitrarily large per-processor
problems cannot fit in a single processor's memory. A solution, first proposed in \cite{karypis03},
is to sort the samples using the parallel bitonic sort. This approach reduces the complexity of sorting to
 $\mathcal{O}(\frac{N}{n_p} \log \frac{N}{n_p} + n_p \log n_p)$. 
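A sequential simulation of the splitter-selection structure of sample sort clarifies where the $n_p^2$ samples come from. This Python sketch is a simplification for illustration: the per-"processor" lists live in one address space, and the gathered samples are sorted serially here, whereas the parallel code sorts them with a bitonic sort:

```python
from bisect import bisect_right

def sample_sort(buckets_in):
    """Sequential simulation of sample sort across n_p 'processors'.
    buckets_in: list of n_p lists, one per simulated processor."""
    n_p = len(buckets_in)
    local = [sorted(b) for b in buckets_in]      # local sort: O((N/n_p) log(N/n_p))
    # Each processor contributes n_p - 1 evenly spaced samples
    # (n_p (n_p - 1) samples in total).
    samples = sorted(b[i * len(b) // n_p]
                     for b in local
                     for i in range(1, n_p) if b)
    # n_p - 1 global splitters partition the key space into n_p buckets.
    splitters = [samples[i * len(samples) // n_p] for i in range(1, n_p)]
    out = [[] for _ in range(n_p)]
    for b in local:                              # route each key to its bucket
        for x in b:
            out[bisect_right(splitters, x)].append(x)
    return [sorted(o) for o in out]              # final per-bucket sort
```

Concatenating the output buckets in order yields a globally sorted sequence; the serial sort of the `samples` list is exactly the $n_p^2 \log n_p$ term that the bitonic variant removes.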

Isogranular scalability analysis was performed for all three
distributions with an output size of roughly 1M octants per processor,
for processor counts ranging from 1 to 1024. Wall-clock timings,
 speedup, and efficiency for the isogranular analysis for the three 
 distributions are shown in Figures \ref{fig:isoG}, \ref{fig:isoLN}, 
 and \ref{fig:isoRG}. 
 
Since the regularly spaced distribution is inherently balanced, the
number of input points was much greater for this case than for the
Gaussian and Log-normal distributions. Both the Gaussian and
Log-normal distributions are imbalanced; in Table \ref{tab:numbers},
we can see that, on average, the number of unbalanced octants is three
times the number of input points and that the number of octants
doubles after balancing. For the regularly spaced distribution, we
observe that in some cases the number of octants is the same as the
number of input points (2M, 16M, 128M, and 1B). These are special
cases in which the resulting grid is a perfectly regular grid. Thus,
while both the input and output grain sizes remain almost constant for
the Gaussian and Log-normal distributions, only the output grain size
remains constant for the Regular distribution. Hence, the trend for
the Regular distribution differs slightly from those for the Gaussian
and Log-normal distributions.
  
The plots demonstrate the good isogranular scalability of the
algorithms. We achieve near-optimal isogranular scalability for all
three distributions ($50$s per $10^6$ octants per processor for the
Gaussian and Log-normal distributions, and $25$s for the regularly
spaced distribution).

\begin{figure}
  \begin{center}
    \includegraphics[width=\textwidth]{PDF_Figures/isoGran}
  \end{center}
  \caption{Isogranular scalability for a Gaussian distribution of
  {\tt1M} octants per processor. From left to right, the bars indicate
  the time taken for the different components of our algorithms for
  increasing processor counts. The bar for each processor is
  partitioned into {\tt4} sections. From top to bottom, the sections
  represent the time taken for {\tt(1)} communication (including
  related pre-processing and post-processing) during balance
  refinement {\tt(Algorithm \ref{alg:parBal})}, {\tt(2)} balancing
  across intra- and inter-processor boundaries {\tt(Algorithm
  \ref{alg:ripple})}, {\tt(3)} balancing the blocks {\tt(Algorithm
  \ref{alg:effConBal})}, and {\tt(4)} construction from points
  {\tt(Algorithm \ref{alg:p2o})}.}
  \label{fig:isoG}
\end{figure}

\begin{figure}
  \begin{center}
	\includegraphics[width=\textwidth]{PDF_Figures/isoLN}
  \end{center}
  \caption{Isogranular scalability for a Log-normal distribution of
  {\tt1M} octants per processor. From left to right, the bars indicate
  the time taken for the different components of our algorithms for
  increasing processor counts. The bar for each processor is
  partitioned into {\tt4} sections. From top to bottom, the sections
  represent the time taken for {\tt(1)} communication (including
  related pre-processing and post-processing) during balance
  refinement {\tt(Algorithm \ref{alg:parBal})}, {\tt(2)} balancing
  across intra- and inter-processor boundaries {\tt(Algorithm
  \ref{alg:ripple})}, {\tt(3)} balancing the blocks {\tt(Algorithm
  \ref{alg:effConBal})}, and {\tt(4)} construction from points
  {\tt(Algorithm \ref{alg:p2o})}.}
  \label{fig:isoLN}
\end{figure}

\begin{figure}
  \begin{center}
	\includegraphics[width=\textwidth]{PDF_Figures/isoRG}
  \end{center}
  \caption{Isogranular scalability for a Regular distribution of
  {\tt1M} octants per processor. From left to right, the bars indicate
  the time taken for the different components of our algorithms for
  increasing processor counts. The bar for each processor is
  partitioned into {\tt4} sections. From top to bottom, the sections
  represent the time taken for {\tt(1)} communication (including
  related pre-processing and post-processing) during balance
  refinement {\tt(Algorithm \ref{alg:parBal})}, {\tt(2)} balancing
  across intra- and inter-processor boundaries {\tt(Algorithm
  \ref{alg:ripple})}, {\tt(3)} balancing the blocks {\tt(Algorithm
  \ref{alg:effConBal})}, and {\tt(4)} construction from points
  {\tt(Algorithm \ref{alg:p2o})}. While both the input and output
  grain sizes remain almost constant for the Gaussian and Log-normal
  distributions, only the output grain size remains constant for the
  Regular distribution. Hence, the trend seen in this study is a
  little different from those for the Gaussian and Log-normal
  distributions.
  \label{fig:isoRG}
\end{figure}

Fixed-size scalability tests were also performed for three problem
sizes, small (1 million octants), medium (32 million octants), and large
(128 million octants), for the Gaussian distribution. These results are
plotted in Figures \ref{fig:fsS}, \ref{fig:fsM} and \ref{fig:fsL}.

\begin{figure}
  \begin{center}
    \includegraphics[width=\textwidth]{images/fs1}
  \end{center}
  \caption{Fixed-size scalability for a Gaussian distribution of {\tt1M}
  octants. From left to right, the bars indicate the time taken for
  the different components of our algorithms for increasing processor
  counts. The bar for each processor is partitioned into {\tt2}
  columns, which are further subdivided. The left column is subdivided
  into {\tt2} sections and the right column is subdivided into {\tt6}
  sections. The top and bottom sections of the left column represent
  the total time taken for {\tt(1)} balance refinement {\tt(Algorithm
  \ref{alg:parBal})} and {\tt(2)} construction {\tt(Algorithm
  \ref{alg:p2o})}, respectively. From top to bottom, the sections of
  the right column represent the time taken for {\tt(1)} balancing
  across intra- and inter-processor boundaries {\tt(Algorithm
  \ref{alg:ripple})}, {\tt(2)} balancing the blocks {\tt(Algorithm
  \ref{alg:effConBal})}, {\tt(3)} communication (including related
  pre-processing and post-processing) during balance refinement,
  {\tt(4)} local processing during construction, {\tt(5)} {\tt
  BlockPartition}, and {\tt(6)} {\tt Sample Sort}.}
  \label{fig:fsS}
\end{figure}

\begin{figure}
  \begin{center}
	\includegraphics[width=\textwidth]{images/fs32}
  \end{center}
  \caption{Fixed-size scalability for a Gaussian distribution of
  {\tt32M} octants. From left to right, the bars indicate the time
  taken for the different components of our algorithms for increasing
  processor counts. The bar for each processor is partitioned into
  {\tt2} columns, which are further subdivided. The left column is
  subdivided into {\tt2} sections and the right column is subdivided
  into {\tt6} sections. The top and bottom sections of the left column
  represent the total time taken for {\tt(1)} balance refinement
  {\tt(Algorithm \ref{alg:parBal})} and {\tt(2)} construction
  {\tt(Algorithm \ref{alg:p2o})}, respectively. From top to bottom,
  the sections of the right column represent the time taken for
  {\tt(1)} balancing across intra- and inter-processor boundaries
  {\tt(Algorithm \ref{alg:ripple})}, {\tt(2)} balancing the blocks
  {\tt(Algorithm \ref{alg:effConBal})}, {\tt(3)} communication
  (including related pre-processing and post-processing) during
  balance refinement, {\tt(4)} local processing during construction,
  {\tt(5)} {\tt BlockPartition}, and {\tt(6)} {\tt Sample Sort}.}
  \label{fig:fsM}
\end{figure}

\begin{figure}
  \begin{center}
	\includegraphics[width=\textwidth]{images/fs128}
  \end{center}
  \caption{Fixed-size scalability for a Gaussian distribution of
  {\tt128M} octants. From left to right, the bars indicate the time
  taken for the different components of our algorithms for increasing
  processor counts. The bar for each processor is partitioned into
  {\tt2} columns, which are further subdivided. The left column is
  subdivided into {\tt2} sections and the right column is subdivided
  into {\tt6} sections. The top and bottom sections of the left column
  represent the total time taken for {\tt(1)} balance refinement
  {\tt(Algorithm \ref{alg:parBal})} and {\tt(2)} construction
  {\tt(Algorithm \ref{alg:p2o})}, respectively. From top to bottom,
  the sections of the right column represent the time taken for
  {\tt(1)} balancing across intra- and inter-processor boundaries
  {\tt(Algorithm \ref{alg:ripple})}, {\tt(2)} balancing the blocks
  {\tt(Algorithm \ref{alg:effConBal})}, {\tt(3)} communication
  (including related pre-processing and post-processing) during
  balance refinement, {\tt(4)} local processing during construction,
  {\tt(5)} {\tt BlockPartition}, and {\tt(6)} {\tt Sample Sort}.}
  \label{fig:fsL}
\end{figure}

%}}}

%{{{ performance
\subsection{Octree Meshing}
\label{sec:SCresults}

In this section we present numerical results for tree construction,
balancing, meshing, and matrix-vector multiplication for a number of
different cases. The algorithms were implemented in C++ using the MPI
library. PETSc \cite{petsc-web-page} was used for profiling the code. We consider two point-distribution cases: a
regular grid, to compare directly with structured grids, and a
Gaussian distribution, which resembles a generic non-uniform
distribution. In all examples we discretized a variable-coefficient
linear elliptic operator, using piecewise constant coefficients for both the
Laplacian and identity operators. Material properties for the elements were
stored in an independent array rather than within the octree data structure.
 
First, we test the performance of the code on a sequential machine.
We compare against a regular-grid implementation with direct indexing (the nodes
are ordered lexicographically along the coordinates). The results are presented in Table \ref{tab:seqODA}.
We report construction times and the total time for 5 matrix-vector
multiplications. Overall, the code performs quite well. Neither the meshing time nor the time for
performing the \texttt{MatVecs} is sensitive to the distribution. The \texttt{MatVec} time is
only $50\%$ more than that for a regular grid with direct indexing, about five seconds for four million octants.
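To make the direct-indexing baseline concrete, here is a minimal matrix-vector product on a $k^3$ grid with lexicographic indexing. Note the simplification: a 7-point finite-difference Laplacian with zero-Dirichlet treatment at the boundary stands in for the paper's trilinear finite-element operator, and the language is Python rather than the C++ of the actual code:

```python
def laplacian_matvec(x, k):
    """y = A x for the 7-point Laplacian stencil on a k^3 grid.
    Nodes are addressed by direct (lexicographic) indexing; neighbors
    falling outside the grid are treated as zero (Dirichlet)."""
    idx = lambda i, j, l: (i * k + j) * k + l   # lexicographic index
    y = [0.0] * (k ** 3)
    for i in range(k):
        for j in range(k):
            for l in range(k):
                s = 6.0 * x[idx(i, j, l)]       # diagonal term
                for di, dj, dl in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    ii, jj, ll = i + di, j + dj, l + dl
                    if 0 <= ii < k and 0 <= jj < k and 0 <= ll < k:
                        s -= x[idx(ii, jj, ll)]
                y[idx(i, j, l)] = s
    return y
```

On a regular grid every neighbor index is computable arithmetically, so no lookup tables are needed; the octree \texttt{MatVec} pays its roughly $50\%$ overhead for indirect addressing through the mesh data structure.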

\begin{table}
\small
%\scriptsize
	\centering
		\begin{tabular}{|l|l|l|l|l|l|l|}\hline
		Problem & Regular& \multicolumn{4}{c|}{Octree Mesh}\\\cline{3-6}
		Size &  Grid	&	\multicolumn{2}{c|}{Uniform} & \multicolumn{2}{c|}{Gaussian} \\\cline{3-6}
				&		MatVec& Meshing	& MatVec &	Meshing & MatVec\\\hline
256K &	1.08  &	4.07  & 1.62	&	4.34  & 1.57 \\
512K &	2.11  &	8.48  & 3.18	&	8.92  & 3.09 \\
1M	 &	4.11  &	17.52 & 6.24	&	17.78 & 6.08  \\
2M	 &	8.61  &	36.27 & 11.13	&	37.29 & 12.33 \\ 
4M	 &	17.22 &	73.74 & 24.12	&	76.25 & 24.22 \\\hline	 
%256K &     2.1483 &   5.26415   &  2.3994  &   6.13723  &   5.5050\\
%512K  &    4.3557 &   10.3013   &   5.2296 &   12.7178  &    10.560\\
%1M   &   8.1822   &  21.4262    &   9.9921 & 25.8099   &     20.753\\
%2M   &     17.247 &  44.2938   &    19.119 & 53.618    &     43.046\\
%4M   &   34.251   & 90.1725   &     39.541 & 109.351   &      82.198\\\hline
		\end{tabular}
%%
\caption{The time to construct ({\bf Meshing}) and perform 5 matrix-vector
multiplications ({\bf MatVec}) on a single processor for increasing
problem sizes. Results are presented for Gaussian distribution and for
uniformly spaced points. We compare with matrix-vector multiplication on a regular grid (direct indexing)
having the same number of elements and the same discretization (trilinear elements). We discretize a variable 
coefficient (isotropic) operator. All wall-clock times are in seconds. The runs took place on a 2.2 GHz, 32-bit Xeon box.
The sustained performance is approximately 400 MFlops/sec for the structured grid. For the uniform and Gaussian distribution of points, the sustained performance is approximately 280 MFlops/sec. 
\label{tab:seqODA}}
\end{table}

\begin{table*}
	\centering
		\begin{tabular}{|l|l|l|l|l|l|}\hline
		Problem Size & 	\multicolumn{2}{c|}{Uniform Distribution} & \multicolumn{3}{c|}{Gaussian Distribution} \\\cline{2-6}
			     &   Points & Octants & Points & Unbalanced & Balanced\\\hline
1M 	&  405.2K   &	994.9K   &  179.9K & 607.3K  & 995.37K \\
2M 	&  2.1M     &	2.1M     &  360.9K & 1.21M   & 2M	\\
4M 	&  2.41M    &	4.07M	 &  720K   & 2.43M   & 3.97M  	\\
8M 	&  3.24M    &	7.96M	 &  1.47M  & 4.91M   & 8.03M 	\\
16M    &  16.77M   & 	16.77M	 &  2.89M  & 9.69M   & 16.03M	\\
32M    &  19.25M   &	32.53M   &  5.8M   & 19.61M  & 31.89M	\\
64M    &  25.93M   & 	63.67M	 &  11.72M & 39.28M  & 64.39M 	\\
128M   &  134.22M  &	134.22M  &  23.5M  & 79.29M  & 130.62M	\\
256M   &  153.99M  &	260.24M  &  47M    & 158.63M & 256.78M	\\
512M   &  207.47M  &	509.39M  &  94M    & 315.19M & 519.11M \\
1B 	&  1.07B    &	1.07B	 &  188M   & 635.08M & 1.04B	\\
2B 	&   1.96B   &     2B      &  376M   & 1.26B   & 2.05B	\\
4B	&   -       &     -      &  752M   & 2.52B   & 4.16B	\\\hline
\end{tabular}
%%
\caption{Input and output sizes for the construction and balancing
algorithms for the isogranular scalability experiment on Gaussian
and uniform point distributions.}
\label{tab:probsize}
\end{table*}

\begin{figure*}
  \begin{center}
    %\includegraphics[width=0.9\textwidth]{gaussian}
	\includegraphics{isoGauss}
  \end{center}
%%
\caption{Isogranular scalability for a Gaussian distribution with
\texttt{1M} octants per processor. From left to right, the bars indicate
the time taken for the different components of our algorithms for
increasing processor counts. The bar for each processor is partitioned
into \texttt{4} sections. From top to bottom, the sections represent the
time taken for \texttt{(1)} performing 5 matrix-vector multiplications,
\texttt{(2)} construction of the octree-based mesh, \texttt{(3)} balancing the
octree, and \texttt{(4)} construction from points.}
\label{fig:SCisoG}
\end{figure*}

\begin{figure*}
  \begin{center}
    %\includegraphics[width=0.9\textwidth]{uniform}
	\includegraphics{isoReg}
  \end{center}
\caption{Isogranular scalability for uniformly spaced points with
\texttt{1M} octants per processor. From left to right, the bars indicate
the time taken for the different components of our algorithms for
increasing processor counts. The bar for each processor is partitioned
into \texttt{4} sections. From top to bottom, the sections represent the
time taken for \texttt{(1)} performing 5 matrix-vector multiplications,
\texttt{(2)} construction of the octree-based mesh, \texttt{(3)} balancing the
octree, and \texttt{(4)} construction from points.
%While both the input and output grain sizes remain almost constant for the Gaussian distribution, only the output grain size remains constant for the Uniform distribution. Hence, the trend seen in this study is a little different from those for the Gaussian distribution.
}
\label{fig:SCisoRG}
\end{figure*}

\begin{figure*}
  \begin{center}
	\includegraphics{Ex4way}
  \end{center}
\caption{Comparison of meshing times using exhaustive search against a hybrid approach, in which only the first layer of octants uses exhaustive search and the rest use the 4-way search to construct the lookup tables. The test was performed using a Gaussian distribution of 1 million octants per processor. The 4-way search is faster than the exhaustive search and scales up to 4096 processors.}
\label{fig:ex4way}
\end{figure*}

In the second set of experiments, we test the isogranular scalability
of our code. Again, we consider two point distributions: a uniform one
and a Gaussian one. The sizes of the input point sets and of the
corresponding linear and balanced octrees are reported in Table
\ref{tab:probsize}, and the runtimes for the two distributions are
shown in Figures \ref{fig:SCisoG} and \ref{fig:SCisoRG}. All the runs
took place on a Cray XT3 MPP system
equipped with 2068 compute nodes (two 2.6 GHz AMD Opteron processors and 2 GB
of RAM per node) at the Pittsburgh Supercomputing Center.
We observe excellent scalability for the construction and balancing of
octrees, for meshing, and for the matrix-vector multiplication operation.
For example, in Figure \ref{fig:SCisoG} we
observe the expected complexity in the construction and balancing of
the octree (there is a slight growth due to the logarithmic factor in
the complexity estimate), and we observe roughly size-independent
behavior for the matrix-vector multiplication. The results are even better
for the uniform distribution of points in Figure \ref{fig:SCisoRG}, where the time for
5 matrix-vector multiplications remains nearly constant at approximately 20 seconds.

Finally, we compare the meshing times for the two search strategies presented in Section \ref{sec:buildLut}.
As shown in Figure \ref{fig:ex4way} for the Gaussian distribution, the 4-way search yields a significant improvement in meshing time at all processor counts.

%}}}

\subsection {Octree-based MI}
\label{sec:OMIresults}

In this section we describe experiments carried out to test the effectiveness of octree-based MI in the rigid registration of inter-modality images. We first describe the similarity profiles obtained when an artificial transformation is introduced between two registered images, comparing the octree-based method with estimation of mutual information based on uniform sampling. The first experiment was performed using simulated MR datasets obtained from the BrainWeb database \cite{brainweb}. The second experiment was performed with 13 CT datasets with corresponding SPECT images. These images were all acquired using a Siemens Symbia\texttrademark~T TruePoint SPECT-CT system and are assumed to be self-registered.
%\footnote[1]{The alignment was visually verified.}.
\begin{figure}[tbp]
	\centering
		\includegraphics[angle=90, height=0.3\textwidth, width=0.9\textwidth]{images/bw_plot}
	\caption{Comparison of the mutual information computed via uniform 
	sampling (dotted lines) and using the proposed octree-based sampling (solid lines), on BrainWeb datasets. The plots shown are for a comparison between a T1-weighted (T1) image and a proton density (PD) image with 9\% noise and 40\% intensity non-uniformity.}
	\label{fig:bw_plot}
\vspace{-2mm}
\end{figure}
We analyzed the mutual information profiles while varying the transformation. The transformation parameters were varied one at a time, and the similarity profiles were plotted. The plots for translation along the $x$-axis and for rotation about the $x$ and $y$ axes are shown in Figures \ref{fig:bw_plot} and \ref{fig:spect_plot}, for T1-PD MR images and CT-SPECT images, respectively\footnote{The profiles generated using all the voxels in the image were almost identical to those obtained by uniform subsampling, and are omitted for clarity.}. The profiles for translation and rotation along the other axes were similar. In all cases we compare octree-based sampling with uniform sampling using a similar total number of samples. Since the octree reduced the number of samples by a factor of 8 on average, we subsampled by a factor of 2 along each direction for the uniform sampling strategy, so that both cases use the same number of samples.
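The underlying MI estimate, common to both sampling strategies, can be sketched from a joint intensity histogram. In this illustrative Python sketch the bin count and the stride-based uniform subsampling are our own choices (an octree-based variant would instead draw one sample per leaf), and no claim is made that it matches the paper's implementation details:

```python
from math import log

def mutual_information(a, b, bins=8, stride=1):
    """MI between two equally sized intensity lists via a joint histogram.
    stride > 1 uniformly subsamples both lists (the 'uniform sampling' case)."""
    a, b = a[::stride], b[::stride]
    n = len(a)
    amin, arange = min(a), (max(a) - min(a)) or 1
    bmin, brange = min(b), (max(b) - min(b)) or 1
    joint = {}
    for x, y in zip(a, b):
        i = min(int((x - amin) * bins / arange), bins - 1)
        j = min(int((y - bmin) * bins / brange), bins - 1)
        joint[i, j] = joint.get((i, j), 0) + 1
    # Marginal histograms follow from the joint one.
    pa, pb = {}, {}
    for (i, j), c in joint.items():
        pa[i] = pa.get(i, 0) + c
        pb[j] = pb.get(j, 0) + c
    # MI = sum_ij p(i,j) log( p(i,j) / (p(i) p(j)) )
    return sum((c / n) * log(c * n / (pa[i] * pb[j]))
               for (i, j), c in joint.items())
```

Perfectly dependent intensity lists yield the entropy of the histogram, and independent ones yield zero; the sampling strategy only changes which voxel pairs feed the histogram.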
\begin{figure}[tbp]
	\centering
		\includegraphics[angle=90, height=0.34\textwidth, width=0.9\textwidth]{images/spect_plot}
	\caption{Comparison of the mutual information computed via uniform sampling (dotted lines) and using the proposed octree-based sampling (solid lines), with CT-SPECT datasets. The plots shown are for a comparison between a CT cardiac image ($512\times512\times25$) and a SPECT image ($128\times128\times128$).}
	\label{fig:spect_plot}
\vspace{-3mm}
\end{figure}
%\vspace{-5mm}
As can be seen from Figure \ref{fig:bw_plot}, both methods perform equally well on the BrainWeb datasets: both sampling techniques produce smooth curves with sharp peaks and very good capture ranges. However, in the CT-SPECT comparison, shown in Figure \ref{fig:spect_plot}, the octree-based sampling performs much better. Although both approaches have good profiles under translation, under rotation the uniform sampling approach exhibits a weak maximum at the optimal value with a very small capture range. In contrast, the octree-based approach exhibits a strong maximum at the optimal value and a much larger capture range. The fact that the neighboring maxima in the vicinity of the optimum are lower suggests that a multi-resolution approach could be used to increase the capture range further. The uniform sampling approach will in most cases converge to the wrong result, since the neighboring maxima are much larger in that case.
\begin{table}[tbp]
\begin{scriptsize}
	\begin{center}
		\begin{tabular}{|l|c|c|c|c|c|c|}
		\hline
		&\multicolumn{3}{c|}{Uniform sampling} & \multicolumn{3}{c|}{Octree-based} \\
		\cline{2-7}
		Dataset & Success & Trans. error (mm) & Rot. error (deg)& Success & Trans. error (mm) & Rot. error (deg)\\
		\hline
		T1 - T2 &  82.4\% & $0.48\pm 0.63$ & $0.17\pm 0.24$ & 86.1\% & $0.53\pm 0.59$ & $0.21\pm 0.19$  \\
		T1 - PD &  79.7\% & $0.57\pm 0.66$ & $0.2\pm 0.33$  & 81.3\% & $0.59\pm 0.62$ & $0.22\pm 0.23$ \\
		CT - SPECT  & 31.1\% & $0.73\pm 0.69$ & $0.23\pm0.28$ & 68.5\% & $0.64\pm 0.57$ & $0.21\pm 0.31$ \\
		\hline
		\end{tabular}
	\end{center}
	\end{scriptsize}
	\caption{Success rates and the means and standard deviations of the registration errors (computed over the successful registrations) for the different test cases.}
	\label{tab:results}
\vspace{-5mm}
\end{table}
%(an algorithm would have to be initialized very close to the central local maximum in order to converge to it)

Registration was performed on a number of datasets to quantitatively assess the performance of octree-based mutual information within the registration framework. We selected a T1-weighted image with no noise and uniform intensity as the template image. T2-weighted and proton density (PD) images with varying levels of noise ($0-9\%$) and intensity non-uniformity ($0-40\%$) were registered to the template image, with a pseudo-random initial transform applied. The random transform was selected such that the initial translation was at most half the size of the image (to ensure overlap) and the rotational components were less than $60^\circ$. The same set of pseudo-random transformations was used for both methods. A registration was considered successful if the final error was less than 2 mm for the translational parameters and less than $2^\circ$ for the rotational parameters. Similar experiments were performed for the CT-SPECT dataset; the results are summarized in Table \ref{tab:results}. The errors in the estimated translation and rotation parameters were calculated using only the cases in which registration was successful. We can see from Table \ref{tab:results} that octree-based sampling performs slightly better than uniform sampling in the case of T1-T2 and T1-PD registration. We emphasize that the success rate for CT-SPECT registration is much higher with octree-based sampling than with uniform sampling, owing mainly to the broader capture range of the octree-based method. The average time to perform the registration was 13 seconds using the octree-based approach, 14 seconds for the uniformly sampled approach, and 85 seconds when using all voxels; the time for the octree-based method includes the time to compute the octree. All experiments were performed on an Intel Xeon 2.8 GHz with 2 GB of RAM.

%{{{ conclusions
\section{Conclusions and future work}
\label{sec:conclude}

We have presented a set of algorithms for the parallel construction,
balancing, and meshing of linear octrees. Our mesh data structure is interfaced
with PETSc \cite{petsc-web-page}, thus allowing us to use its linear and non-linear
solvers. Our target applications include elliptic, parabolic, and hyperbolic partial
differential equations. We presented results that verify the overall scalability of
our code. The overall meshing time is on the order of one minute for
problems with up to four billion elements. Thus, our scheme
enables the efficient execution of applications that require frequent
remeshing.

Several factors must be considered to improve the performance
of the proposed algorithms. To minimize communication costs,
it is desirable to use coarse blocks that are as large as possible, since the
communication cost is proportional to the area of the inter-processor
boundaries. However, blocks that are too coarse increase the work for the
local block balancing stage (Section \ref{sec:conBal}): if additional
local splits are introduced, the intra-block boundaries grow,
increasing the workload of the first ripple-balance pass. The
local balancing step of the algorithm could be made more efficient by
applying it recursively, estimating the largest block size that can be
balanced by the search-free approach. Such an estimate should be based
on low-level architectural details, such as the cache size.

There are two important extensions: multigrid schemes, and
higher-order discretizations. For the former, restriction and
prolongation operators need to be designed, along with refinement and
coarsening schemes. Higher-order schemes will require additional
bookkeeping and longer lookup tables as the inter-element connectivity
will increase.
%}}}





