\section{Parallel Computing in CUDA}
Sections 2 and 3 described algorithms for finding the
parameters of the Log Laplace and the Asymmetric
Exponential Power distributions. We follow these methods
to write sequential C++ code, which serves as the baseline
for performance and correctness when developing the
massively parallel CUDA C implementation.
\subsection{Motivation for Acceleration}
The motivation for minimizing the runtime of the algorithms is twofold:
backtesting and real-time high-frequency trading.  For example, when working
with the AEPD, a single fit of the observations
$x_1, x_2, \cdots, x_{500}$ over 80 candidate values of
$\alpha$ takes 1.2 seconds of CPU time for this window of 500 points.

For backtesting purposes, a few months of high-frequency data numbers in the
millions of observations; at 1.2 seconds per fit, one million fits take 1.2 million seconds, or roughly 333 hours. Backtesting quickly becomes slow and consumes too many of the trader's resources.

In high-frequency trading, data arrives every second and trading algorithms
must make decisions as soon as possible.  While the AEPD fitting
algorithm is only one part of a complex system, at 1.2 seconds per fit it
takes too much time to run.
\subsection{Sequential Programming in C++}
Here we show how we turned the AEPD estimation equations into
a sequential programming algorithm, shown in pseudocode. Recall from Section 3 that maximum likelihood estimation for the AEPD yields equations for $\hat{\sigma}$ and $\hat{\kappa}$:
\begin{equation} \label{eq:MLEkappa2}
\hat{\kappa} = 
\left[
\frac{X_{\theta}^-}{X_{\theta}^+}
\right]
^{\frac{1}{2(\alpha+1)}}
\end{equation}

\begin{equation} \label{eq:MLEsigma3}
\hat{\sigma} =
\big[
\alpha(X_{\theta}^+ X_{\theta}^-)^{\frac{\alpha}{2(\alpha+1)}} 
\big(
(X_{\theta}^+)^{\frac{1}{\alpha+1}} +
(X_{\theta}^-)^{\frac{1}{\alpha+1}}
\big)
\big]
^{\frac{1}{\alpha}}
\end{equation}

The estimators $\hat{\alpha}$ and $\hat{\theta}$ are found by maximizing the log-likelihood:
\begin{equation} \label{eq:AEPDlogLike}
L^* = n\left(\log\frac{\alpha}{\Gamma(\frac{1}{\alpha})}
+\log\frac{\kappa}{1+\kappa^2} - \log\sigma\right)
-\frac{\kappa^\alpha}{\sigma^\alpha}X_{\theta}^+
-\frac{X_{\theta}^-}{\kappa^\alpha\sigma^\alpha}
\end{equation}
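As a concrete illustration of Equations \ref{eq:MLEkappa2}, \ref{eq:MLEsigma3}, and \ref{eq:AEPDlogLike}, the following is a minimal C++ sketch computing $\hat{\kappa}$, $\hat{\sigma}$, and $L^*$, assuming the sums $X_{\theta}^+$ and $X_{\theta}^-$ have already been accumulated (the function and struct names here are illustrative, not from our implementation):

```cpp
#include <cmath>

// Illustrative sketch: kappa-hat, sigma-hat, and log-likelihood L*
// from already-accumulated Xp (X_theta^+) and Xm (X_theta^-).
struct Estimates { double kappa, sigma, logLike; };

Estimates aepd_estimates(double Xp, double Xm, double alpha, int n) {
    Estimates e;
    // kappa-hat = (X- / X+)^(1 / (2(alpha+1)))
    e.kappa = std::pow(Xm / Xp, 1.0 / (2.0 * (alpha + 1.0)));
    // sigma-hat = [ alpha (X+ X-)^(alpha/(2(alpha+1)))
    //               ( (X+)^(1/(alpha+1)) + (X-)^(1/(alpha+1)) ) ]^(1/alpha)
    double mid = std::pow(Xp * Xm, alpha / (2.0 * (alpha + 1.0)));
    double sum = std::pow(Xp, 1.0 / (alpha + 1.0))
               + std::pow(Xm, 1.0 / (alpha + 1.0));
    e.sigma = std::pow(alpha * mid * sum, 1.0 / alpha);
    // L* = n( log(alpha/Gamma(1/alpha)) + log(kappa/(1+kappa^2)) - log sigma )
    //      - (kappa^a / sigma^a) X+  -  X- / (kappa^a sigma^a)
    double ka = std::pow(e.kappa, alpha), sa = std::pow(e.sigma, alpha);
    e.logLike = n * (std::log(alpha / std::tgamma(1.0 / alpha))
                     + std::log(e.kappa / (1.0 + e.kappa * e.kappa))
                     - std::log(e.sigma))
                - (ka / sa) * Xp - Xm / (ka * sa);
    return e;
}
```

Note that when $X_{\theta}^+ = X_{\theta}^-$ the estimate $\hat{\kappa}$ is exactly 1, reflecting a symmetric fit.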

Remember that the candidate values of $\hat{\theta}$ are the values of the time series $x_1, x_2, \cdots, x_{500}$, and the values of $\alpha$ are an iteration over 80 values chosen by the modeler's intuition.

The pseudocode of the algorithm for finding the MLE estimators is then:
\linespread{1}
\begin{alltt}
sort \(x[1], x[2], \ldots, x[500]\) in increasing order;
for (i from 1 to 500)\{
  for (j from 1 to 80)\{
    \(X_{\theta}^{+} = 0\); \(X_{\theta}^{-} = 0\);
    for (k from 1 to 500)\{
      if (i < k)
        \(X_{\theta}^{+}\) += \((x[k]-x[i])^{\alpha[j]}\);
      else
        \(X_{\theta}^{-}\) += \((x[i]-x[k])^{\alpha[j]}\);
    \}
    Calculate \(\hat{\kappa}, \hat{\sigma}\);
    Calculate the log-likelihood;
  \}
\}
Find the maximum log-likelihood and the associated parameters;
\end{alltt}
\linespread{2}
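The pseudocode above can be fleshed out as a compact sequential C++ sketch; this is a minimal illustration of the grid search (function and variable names are illustrative, not the paper's actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sequential sketch of the MLE grid search: for each candidate theta = x[i]
// and each candidate alpha, accumulate X+ and X-, compute kappa-hat,
// sigma-hat, and L*, and keep the parameters maximizing L*.
struct Fit { double theta, alpha, kappa, sigma, logLike; };

Fit fit_aepd(std::vector<double> x, const std::vector<double>& alphas) {
    std::sort(x.begin(), x.end());              // sort observations increasing
    const int n = static_cast<int>(x.size());
    Fit best{0.0, 0.0, 0.0, 0.0, -INFINITY};
    for (int i = 0; i < n; ++i) {               // candidate theta = x[i]
        for (double a : alphas) {               // candidate alpha values
            double Xp = 0.0, Xm = 0.0;          // reset accumulators per (i, j)
            for (int k = 0; k < n; ++k) {       // accumulate X+ and X-
                if (i < k) Xp += std::pow(x[k] - x[i], a);
                else       Xm += std::pow(x[i] - x[k], a);
            }
            if (Xp <= 0.0 || Xm <= 0.0) continue;  // estimators undefined
            double kappa = std::pow(Xm / Xp, 1.0 / (2.0 * (a + 1.0)));
            double sigma = std::pow(
                a * std::pow(Xp * Xm, a / (2.0 * (a + 1.0)))
                  * (std::pow(Xp, 1.0 / (a + 1.0))
                     + std::pow(Xm, 1.0 / (a + 1.0))),
                1.0 / a);
            double ka = std::pow(kappa, a), sa = std::pow(sigma, a);
            double L = n * (std::log(a / std::tgamma(1.0 / a))
                            + std::log(kappa / (1.0 + kappa * kappa))
                            - std::log(sigma))
                       - (ka / sa) * Xp - Xm / (ka * sa);
            if (L > best.logLike) best = {x[i], a, kappa, sigma, L};
        }
    }
    return best;
}
```

With 500 observations and 80 alpha values, the innermost accumulation runs $500 \times 80 \times 500 = 20$ million times, which is the work the next subsection moves to the GPU.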
\subsection{Parallel Programming in CUDA}
Analyzing the pseudocode, there are three for loops
of various lengths. Timing the algorithm shows that the calculation of $X_{\theta}^-$ and $X_{\theta}^+$ within these loops is the main bottleneck, in both the C++ and the CUDA versions: the calculations inside the three for loops take 99 percent of the total runtime. The main task for the developer is therefore to accelerate the three for loops.

The key to writing the CUDA kernel for this algorithm is to take advantage of the many threads on the GPU device.  In the algorithm described by the pseudocode, the calculations of $X_{\theta}^-$ and $X_{\theta}^+$ depend only on the three loop indices and the input data $x_1, x_2, \cdots, x_{500}$, so they can be computed independently in parallel.

This suggests a particular scheme for writing the CUDA kernel.  Recall from the CUDA programming review that a kernel is launched as a two-dimensional grid of blocks, each containing a number of threads that may themselves be arranged two-dimensionally.  With this in mind, the initial kernel launches $500 \times 80$ blocks, where 500 covers the first for loop over $i$ and 80 covers the second for loop over $j$.  Each of these blocks contains 500 threads, covering the third for loop over $k$.  Note that because block sizes in CUDA are conventionally powers of two, the actual number of threads launched per block is $2^9 = 512$.  Figure~\ref{fig:codeToThreads} shows the conversion of the pseudocode to the GPU kernel launch configuration.

%-------- Figure -----------------------------------
\begin{figure}[h!]
\epsfig{file=figures/pseudoToCUDA.eps}
\caption{Pseudocode for loops mapped to CUDA blocks and threads}
\label{fig:codeToThreads}
\end{figure}

%------- End Figure---------------------------------------

Analyzing Figure~\ref{fig:codeToThreads}, block $(i,j)$ of the CUDA code contains 500 active threads (512 launched). Thread $k$ of block $(i,j)$, with $k \leq 500$, calculates the value of
$\left|x[k]-x[i]\right|^{\alpha[j]}$. The values from all 500 threads of the block are then summed to find $X_{\theta}^+$ and $X_{\theta}^-$.  These values are written to global memory to be used by a second kernel, which finds the MLE parameter values by maximizing the log-likelihood.
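As a rough CPU-side mirror of this decomposition (an illustrative sketch, not the actual CUDA kernel), each $(i,j)$ "block" computes its 500 per-thread terms and then reduces them into $X_{\theta}^+$ and $X_{\theta}^-$; on the GPU the reduction step would run across the block's threads in shared memory:

```cpp
#include <cmath>
#include <vector>

// CPU mirror of the per-block work (names illustrative): "block" (i, j)
// computes the n per-thread terms |x[k] - x[i]|^alpha, then reduces them
// into Xp (indices k > i) and Xm (indices k <= i; the k == i term is 0).
void block_ij(const std::vector<double>& x, int i, double alpha,
              double& Xp, double& Xm) {
    const int n = static_cast<int>(x.size());
    std::vector<double> term(n);
    for (int k = 0; k < n; ++k)                 // one "thread" per index k
        term[k] = std::pow(std::fabs(x[k] - x[i]), alpha);
    Xp = Xm = 0.0;                              // block-level reduction
    for (int k = 0; k < n; ++k)
        (i < k ? Xp : Xm) += term[k];
}
```

Separating the per-thread term computation from the reduction mirrors the GPU structure: the first loop is embarrassingly parallel across threads, while the second is the tree-style shared-memory sum performed within a block.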

\subsection{Results}
Using the GPGPU algorithm described, one fit takes 40 ms, a 30-fold speedup over the CPU-only time of 1.2 seconds.  The next section describes methods of further accelerating this result.
