The serial results in Table~\ref{serial-times} suggest that GPU acceleration of the $\Sigma$-calculation step would provide the most speedup for the entire problem. Although acceleration of the weight-correction step is also possible, it would provide less benefit and would require a tradeoff between speed and accuracy.

To use CUDA acceleration, the author first moved the retrace steps from the Fortran source into a C module, then moved pieces of those steps into a CUDA kernel.



\subsection{Batch Size}

Preliminary versions of the code used the GPU to calculate $\Sigma$ for individual segments. Although the results were accurate, the latency of the individual GPU calls made the runtime far longer than the serial code's. To amortize the GPU latency over a large number of segments, the author developed a batch-based logging system. The $\Sigma$ calculations can then be recast as a matrix operation, i.e.,
\eq{
    \left[ \Sigma \right]_{nseg \times nmodel} =
    \left[ \sigma \right]_{nmat \times nseg}^T \times
    \left[ \rho \right]_{nmat \times nmodel}
}
or equivalently
\eq{ \label{sig-kernel}
    \Sigma(iseg, imodel) =
    \sum_{imat}
        \sigma(imat, iseg) \times
        \rho(imat, imodel)
}
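For reference, the kernel operation of Eq.~\ref{sig-kernel} can be sketched as a serial host-side C routine (the function name and the row-major storage convention here are illustrative, not the production interface):

```c
#include <stddef.h>

/* Serial reference for Eq. (sig-kernel):
 *   Sigma(iseg, imodel) = sum over imat of sigma(imat, iseg) * rho(imat, imodel)
 * Matrices are stored row-major:
 *   sigma is nmat x nseg, rho is nmat x nmodel, Sigma is nseg x nmodel. */
void calc_sigma(size_t nmat, size_t nseg, size_t nmodel,
                const double *sigma, const double *rho, double *Sigma)
{
    for (size_t iseg = 0; iseg < nseg; ++iseg) {
        for (size_t imodel = 0; imodel < nmodel; ++imodel) {
            double acc = 0.0;
            for (size_t imat = 0; imat < nmat; ++imat)
                acc += sigma[imat * nseg + iseg] * rho[imat * nmodel + imodel];
            Sigma[iseg * nmodel + imodel] = acc;
        }
    }
}
```

Each $(iseg, imodel)$ pair is independent of all the others, which is what makes the operation a natural fit for one GPU thread per output element.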

The kernel operation given in Eq.~\ref{sig-kernel} is naively split among $nseg$ blocks of $nmodel$ threads each. Timings for various batch sizes are listed in Table~\ref{batch-vs-time}. Trials with $nbatch \ge 10^5$ appear to exceed the allowable block count: the number of blocks equals the number of segments in a batch, which reaches 199645 at $nbatch = 10^5$.

\inserttable{batch-vs-time}{CUDA run times in seconds for various $nbatch$; $NPS=10^5$, $nmat=nmodel=100$. Trials with $nbatch \ge 10^5$ yielded inaccurate results.}{lcccc}{
    $nbatch$ & max segments/batch & Trial 1 & Trial 2 & Trial 3 \\
    \hline
    1 & 22 & 20.3 & 20.8 & 20.4 \\
    2 & 26 & 12.8 & 12.9 & 12.9 \\
    5 & 32 & 9.0 & 8.9 & 9.0 \\
    10 & 40 & 8.0 & 7.9 & 8.0 \\
    20 & 66 & 7.5 & 7.5 & 7.6 \\
    50 & 137 & 7.3 & 7.3 & 7.4 \\
    100 & 239 & 7.3 & 7.2 & 7.2 \\
    1000 & 2071 & 7.2 & 7.1 & 7.2 \\
    $10^4$ & 20090 & 7.2 & 7.2 & 7.3 \\
    \hline
    $10^5$ & 199645 & 5.5 & 5.1 & 5.1 \\
    \hline
}
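A plausible explanation for the inaccurate $nbatch \ge 10^5$ trials is Fermi's grid-size limit of 65535 blocks per grid dimension: with one block per segment, a 199645-segment batch exceeds what a single one-dimensional launch can cover. A quick host-side check (the helper name is illustrative):

```c
/* On the Fermi architecture a one-dimensional grid may contain at
 * most 65535 blocks.  The naive kernel launches one block per
 * segment, so a batch with more segments than this cannot be
 * covered by a single launch. */
#define FERMI_MAX_GRID_X 65535L

int batch_fits(long segments_in_batch)
{
    return segments_in_batch <= FERMI_MAX_GRID_X;
}
```

Splitting the launch into multiple calls, or using a two-dimensional grid, would lift this restriction.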

A time breakdown of the CUDA calculation is listed in Table~\ref{cuda-breakdown}. The results suggest that the CUDA portion of the code is not fully optimized, since the ``Calculate retrace $\Sigma$'' step is only $1.3 \times$ faster than the serial equivalent. Since there are only $nmodel = 100$ threads per block in these results, larger blocks will be tested in the next section.

\inserttable{cuda-breakdown}{Estimated breakdown of a 7.2-second CUDA calculation with $NPS = 10^5$, $nmat = nmodel = 100$, and $nbatch = 1000$.}{lc}{
    Step & Time [s] \\
    \hline
    Trace & 0.8 \\
    Calculate retrace $\Sigma$ & 4.3 \\
    Calculate retrace wgt & 0.6 \\
    Logging, etc. & 1.5 \\
    \hline
}


\subsection{Thread and Block Size}

With $nbatch = 10^4$, the $\Sigma$ calculation typically uses about 2 million CUDA threads. On the Fermi architecture, a streaming multiprocessor can accept one-dimensional blocks of 32 to 1024 threads. The number of threads per block should be large to use each multiprocessor fully, but the number of blocks should also be large enough to occupy all of the multiprocessors.
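As a rough illustration (assuming the approximately 2 million threads quoted above), the block count for each candidate block size is given by ceiling division:

```c
/* Number of thread blocks needed to cover total_threads threads when
 * each block holds threads_per_block threads (ceiling division). */
long blocks_needed(long total_threads, long threads_per_block)
{
    return (total_threads + threads_per_block - 1) / threads_per_block;
}
```

Even the largest block size tested below, 1024 threads, still yields about 1954 blocks, far more than the number of multiprocessors on any Fermi GPU, so every tested configuration keeps the whole device occupied.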

\inserttable{threads-vs-time}{CUDA run times in seconds for various numbers of threads per block; $NPS=10^5$, $nmat=nmodel=100$, and $nbatch = 10^4$.}{lccc}{
    threads/block & Trial 1 & Trial 2 & Trial 3 \\
    \hline
    32 & 7.2 & 7.3 & 7.2 \\
    64 & 7.0 & 7.0 & 7.1 \\
    128 & 7.1 & 7.1 & 7.0 \\
    256 & 7.1 & 7.1 & 7.1 \\
    512 & 7.2 & 7.2 & 7.3 \\
    1024 & 7.3 & 7.2 & 7.3 \\
    \hline
}

The results in Table~\ref{threads-vs-time} show that the runtime is not strongly sensitive to the number of threads per block. Coalescing memory accesses to increase effective bandwidth may be a more important optimization.



\subsection{Coalesced Memory Access and GPU Shared Memory}

The Fermi architecture coalesces global memory accesses in which the threads of a warp simultaneously request either (a) the same word, i.e., a broadcast, or (b) contiguous words. Strided requests, and non-broadcast requests for the same word, are much slower.

The $\sigma$ and $\rho$ matrices of Eq.~\ref{sig-kernel} are laid out for CPU caching. Two steps were taken to optimize them for GPU memory access. First, the number of threads per block was fixed at $nmat$ so that $\sigma$ could be broadcast to all threads in the block. Second, the $\rho$ matrix was transposed so that the threads' simultaneous requests at a fixed $imat$ would no longer be strided. These changes dramatically reduce the time required for the $\Sigma$ step, as shown by the lower total runtimes in Table~\ref{coalesce-times}.

\inserttable{coalesce-times}{CUDA run times in seconds with and without memory optimization; $NPS=10^5$, $nmat=nmodel=100$, and $nbatch = 10^4$.}{lcccc}{
    optimized? & threads/block & Trial 1 & Trial 2 & Trial 3 \\
    \hline
    no & 512 & 7.2 & 7.2 & 7.3 \\
    yes & 100 & 3.9 & 3.9 & 3.8 \\
    \hline
}
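The transpose in the second step can be sketched as a host-side C routine, assuming the column-major layout inherited from the Fortran side (which is what makes the untransposed reads strided; the function name is illustrative):

```c
#include <stddef.h>

/* rho is stored column-major (Fortran layout): rho(imat, imodel) lives
 * at rho[imodel * nmat + imat].  The nmodel threads of a block reading
 * a fixed imat therefore stride by nmat words.  Storing the transpose,
 * rhoT(imodel, imat) at rhoT[imat * nmodel + imodel], makes those same
 * simultaneous reads contiguous, and hence coalescable. */
void transpose_rho(size_t nmat, size_t nmodel,
                   const double *rho, double *rhoT)
{
    for (size_t imat = 0; imat < nmat; ++imat)
        for (size_t imodel = 0; imodel < nmodel; ++imodel)
            rhoT[imat * nmodel + imodel] = rho[imodel * nmat + imat];
}
```

The transpose is paid once per batch on the host, while the coalesced reads are repeated by every thread of every block, so the trade is strongly favorable.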



\subsection{Accelerating Other Retrace Steps}

The weight-correction calculation and the incrementing of retrace tallies can also be accelerated. The exponential in the weight-multiplier calculation can be computed either with full precision (slow) or with a limited-precision hardware intrinsic (fast); runtimes are compared in Table~\ref{full-accel}. As expected, the results using the hardware function were not exact, unlike all previous trials.

\inserttable{full-accel}{CUDA run times in seconds and result checksums with and without the full-precision exponential; $NPS=10^5$, $nmat=nmodel=100$, and $nbatch = 10^4$. Parenthesized checksum digits differ between the two versions.}{l|cc}{
    function & $\exp$ & \texttt{\_\_expf} \\
    \hline
    Trial 1 & 2.8 & 2.8 \\
    Trial 2 & 2.9 & 2.8 \\
    Trial 3 & 2.9 & 2.8 \\
    \hline
    Checksum 1 & 2.264873(7035819185)E+06 & 2.264873(6700113043)E+06 \\
    Checksum 2 & 2.562296(3911091546)E+06 & 2.562296(3578577880)E+06 \\
    Checksum 3 & 5.172852(4632014949)E+06 & 5.172852(4042484993)E+06 \\
    \hline
}



\subsection{Final Comparison---Serial vs CUDA}

Using a flag to enable or disable the retracing commands, long timing studies were run to estimate the CUDA speedup of the retracing step. The trials were performed on an NVIDIA GTX 650 GPU. The timing results are listed in Table~\ref{final-cuda-compare}.

\inserttable{final-cuda-compare}{Run times in seconds with and without retracing, with and without CUDA; $NPS=10^6$, $nmat=nmodel=100$, and $nbatch = 10^4$.}{lcc}{
    retracing? & serial & CUDA \\
    \hline
    no & 1.423 & 7.948 \\
    yes & 80.888 & 27.781 \\
    retrace time & 79.465 & 19.833 \\
    \hline
    retrace speedup & -- & $4.006 \times$ \\
    overall speedup & -- & $2.911 \times$ \\
    \hline
}
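The derived rows of Table~\ref{final-cuda-compare} follow directly from the raw timings: subtracting the no-retrace baseline isolates the retrace time, and each speedup is the ratio of the serial time to the CUDA time (the computed ratios agree with the tabulated values to within rounding of the last digit):

```c
/* Isolate the retrace time by subtracting the no-retrace baseline
 * from the full run time. */
double retrace_time(double with_retrace, double without_retrace)
{
    return with_retrace - without_retrace;
}

/* Speedup is the ratio of serial time to CUDA time. */
double speedup(double serial_time, double cuda_time)
{
    return serial_time / cuda_time;
}
```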



