\section{Acceleration Using Properties of Data}

The algorithm shown before works for any set of input data, but it can be accelerated further if we know certain properties of the data. The input in this case is a financial time series of high-frequency data ticks. These ticks are updated every second, so we do not expect wide swings within the data. Until a few months ago, the data was quoted to a precision of $10^{-4}$; in that case, there were often only a few unique data points within the 500-point moving window of data. Knowing these two facts allows the developer to make some fundamental changes to the algorithm in order to achieve acceleration.

\subsection{Lookup Table Implementation}
Recall that the bottleneck of the algorithm computes $X_{\theta}^+ = \sum_{i}(x[k]-x[i])^{\alpha[j]}$.  Consider the term $(x[k]-x[i])^{\alpha[j]}$: both $x[k]$ and $x[i]$ come from the data set. Let $x_1, x_2, \cdots, x_{500}$ be the sorted version of the data, so that $x_1 \leq x_2 \leq \cdots \leq x_{500}$. If $x_{500}-x_{1} < \mathit{maxdiff}$ over all windows of data $x[]$, and the values of $x[]$ are represented only to a precision of $10^{-4}$, then we can build a lookup table for the algorithm.
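As a sketch of the bottleneck described above (function and variable names here are hypothetical, and we assume only positive differences contribute, as the $+$ superscript suggests), the inner computation looks like:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of the bottleneck sum: for a fixed index k and a fixed
// exponent alpha[j], accumulate (x[k] - x[i])^alpha over all i in the window.
double theta_plus_sum(const std::vector<double>& x, std::size_t k, double alpha) {
    double sum = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        double diff = x[k] - x[i];
        if (diff > 0.0)                      // assumption: only positive
            sum += std::pow(diff, alpha);    // differences are raised
    }
    return sum;
}
```

Each call to `std::pow` here is one of the millions of ALU operations that the lookup table is meant to eliminate.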

The lookup table is built to hold the precomputed values of $d^{\alpha[j]}$ for every representable difference $d \leq \mathit{maxdiff}$.  So if $\mathit{maxdiff}$ is 0.0010, the lookup table holds 10 values for each value of $\alpha$, or 800 values in total.
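A minimal sketch of one such table for a single value of $\alpha$ (names are hypothetical; this version also stores the zero difference, giving $\mathit{maxdiff}/10^{-4}+1$ entries rather than the 10 quoted above):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Precompute diff^alpha for every representable difference. With tick
// precision 1e-4 and maxdiff = 0.0010, a difference maps to an integer
// index 0..10 via round(diff / tick).
struct PowTable {
    std::vector<double> vals;  // one entry per quantized difference
    double tick;               // data precision, e.g. 1e-4

    PowTable(double maxdiff, double tick_, double alpha) : tick(tick_) {
        std::size_t n = static_cast<std::size_t>(std::round(maxdiff / tick)) + 1;
        vals.resize(n);
        for (std::size_t i = 0; i < n; ++i)
            vals[i] = std::pow(i * tick, alpha);   // done once, up front
    }

    // Replaces std::pow(diff, alpha) in the inner loop with a memory access.
    double pow_of(double diff) const {
        return vals[static_cast<std::size_t>(std::round(diff / tick))];
    }
};
```

In the full algorithm, one such table would be built per value of $\alpha$; the quoted total of 800 entries suggests 80 values of $\alpha$.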

The savings come from not performing the power calculation, which requires the arithmetic logic unit; using the lookup table is simply an access to memory.
\subsubsection{Results and Explanation}
In C++, the single-threaded version of the lookup-table algorithm yields a run time of 0.12 seconds, a speedup factor of 10. In CUDA, the lookup-table algorithm yields a run time of 0.14 ms, a speedup factor of 2.

The use of the lookup table actually reduced the speedup factor of GPGPU vs. CPU dramatically. The explanation lies in the nature of CUDA programming versus CPU programming.  As previously mentioned, using the lookup table eliminates the need for the arithmetic logic unit (ALU); more specifically, it replaces 2 million ALU operations with 2 million accesses to memory.

GPGPUs are designed to perform in computationally intensive environments. The Fermi architecture offers a fully pipelined ALU for each stream processor; in other words, GPGPUs provide many ALUs on chip, many more than a CPU. When we use the lookup table, however, each GPGPU thread has to access global memory, because each shared memory space is too limited in size to hold the table.

When the GPGPU replaces the ALU operation with a lookup table, it is going from a strength to a weakness. A speedup factor of 2 remains, because the device does not have 2 million ALUs and some queueing is required either way, but the speedup is certainly not as dramatic as on the CPU-only side.

\subsection{Bins for Unique Data Points}