\section{Acceleration Using Properties of Data}

The algorithm shown before works for any set of input data, but it can be accelerated further if certain properties of the data are known. The input data in this case is a high-frequency financial time series of ticks. These ticks are updated every second, so we do not expect wide swings within the data. Until a few months ago, the data was quoted to a precision of $10^{-4}$; in that case, there were often only a few unique data points within the 500-point moving window. Knowing these two facts allows the developer to make some fundamental changes to the algorithm in order to achieve acceleration.

\subsection{Circular Buffer}
The C++ data structure used to implement the moving window is a circular buffer that updates as the window advances. Every time a new data value arrives at the end of the buffer, the data value at the beginning of the buffer is removed. Observe that the output of the fitting algorithm is unchanged if its input is unchanged: if the incoming data value is equal to the value being discarded, we can simply keep the parameters from the previous moving window. These cases occur often enough with $10^{-4}$ data to be significant.
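A minimal sketch of this idea follows (the class and method names are illustrative assumptions, not the original implementation): the buffer reports whether the window actually changed, so the caller knows when the previous fit parameters can be reused.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the moving-window circular buffer.  When the incoming tick
// equals the tick being evicted, the window contents are unchanged and
// the caller can keep the parameters from the previous fit.
class CircularWindow {
public:
    explicit CircularWindow(std::size_t n) : buf_(n, 0.0), head_(0) {}

    // Push a new value; returns true if the window contents changed,
    // i.e. the fit must actually be recomputed.
    bool push(double value) {
        double evicted = buf_[head_];
        bool changed = (value != evicted);
        buf_[head_] = value;               // overwrite the oldest slot
        head_ = (head_ + 1) % buf_.size(); // advance to next-oldest
        return changed;
    }

private:
    std::vector<double> buf_;  // window contents; oldest element at head_
    std::size_t head_;         // index of the oldest element
};
```

If `push` returns `false`, the fitting step is skipped entirely for that tick.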

\subsection{Lookup Table Implementation}
Recall that the bottleneck of the algorithm calculates $X_{\theta}^+ = \sum_{i}(x[k]-x[i])^{\alpha[j]}$. Consider the term $(x[k]-x[i])^{\alpha[j]}$: both $x[k]$ and $x[i]$ come from the data set. Let $x_{(1)}, x_{(2)}, \cdots, x_{(500)}$ be the sorted version of the data such that $x_{(1)} \leq x_{(2)} \leq \cdots \leq x_{(500)}$. If $x_{(500)}-x_{(1)} < maxdiff$ for every window of data, and the values of $x$ are represented only to a precision of $10^{-4}$, then we can build a lookup table for the algorithm.

The lookup table holds the values $d^{\alpha[j]}$ for every representable difference $d$ between $0$ and $maxdiff$. So if $maxdiff$ is 0.0010, the lookup table holds 10 values for each value of $\alpha$, or 800 values in total.

The savings come from not performing the power calculation, which requires the arithmetic logic unit; using the lookup table is simply an access to memory.
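As a concrete illustration, here is a minimal single-threaded C++ sketch of such a table. The names (`PowerTable`, `lookup`), the row-major layout, and the hard-coded tick size of $10^{-4}$ are assumptions for the example, not the original implementation; note the table also stores the entry for $d = 0$.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Precomputed table of d^alpha[j] for every difference d = x[k]-x[i].
// Since each d is a multiple of the tick size 1e-4 and bounded by
// maxdiff, d maps to an integer index in [0, maxdiff*1e4].
struct PowerTable {
    std::vector<double> table;  // laid out as table[j * nDiffs + k]
    std::size_t nDiffs;         // number of representable differences

    PowerTable(double maxdiff, const std::vector<double>& alpha)
        : nDiffs(static_cast<std::size_t>(std::lround(maxdiff * 1e4)) + 1) {
        table.resize(alpha.size() * nDiffs);
        for (std::size_t j = 0; j < alpha.size(); ++j)
            for (std::size_t k = 0; k < nDiffs; ++k)
                table[j * nDiffs + k] = std::pow(k * 1e-4, alpha[j]);
    }

    // Replaces pow(x[k]-x[i], alpha[j]) with a single memory access.
    double lookup(double diff, std::size_t j) const {
        std::size_t k = static_cast<std::size_t>(std::lround(diff * 1e4));
        return table[j * nDiffs + k];
    }
};
```

Building the table costs one `pow` per entry, paid once; every subsequent evaluation in the inner loop becomes an array read.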

\textbf{Results:}
In C++, the single thread version of the lookup table algorithm yields a run time of 0.12 seconds, a speed up factor of 10 from the non-lookup version of C++. In CUDA, the lookup table algorithm yields a runtime of 0.14 ms, a speed up factor of 2 from the non-lookup version of CUDA.

The use of the lookup table dramatically reduced the speed up factor of GPGPU vs. CPU. The explanation lies in the nature of CUDA programming vs. CPU programming. As previously mentioned, using the lookup table eliminates the need for the arithmetic logic units (ALUs). More specifically, lookup tables replace 2 million ALU operations with 2 million accesses to memory.

GPGPUs are designed to perform in computationally intensive environments. The Fermi architecture offers a fully pipelined ALU for each stream processor; in other words, GPGPUs offer many ALUs on chip, many more than the CPU. When we use the lookup table, however, each GPGPU thread has to access global memory for the table. There is no way to move the lookup table into shared memory because each shared memory space is limited in size.

When the GPGPU uses a lookup table instead of the ALU operation, it moves from a strength to a weakness. The runtime still improves by a factor of 2, because the device does not have 2 million ALUs and the ALU operations therefore require some queueing, but the speed up is certainly not as dramatic as on the CPU-only side.

\subsection{Bins for Unique Data Points}

Let $x_1, x_2, \cdots, x_{500}$ be the input data. It can be observed that many of these data points are equal to each other; it is often the case that there are fewer than 20 unique values among the 500 inputs. This observation leads to two ways to further accelerate the algorithm.

Suppose there are $m$ unique data points in the set. Let $unique[0], unique[1],$
$\cdots, unique[m-1]$ be the values of these unique data points, and let
$timesRepeated[0],$ $timesRepeated[1], \cdots, timesRepeated[m-1]$ be the number of times each value occurred in the set $x_1, x_2, \cdots, x_{500}$, so that $\sum_{i=0}^{m-1} timesRepeated[i] = 500$.
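A simple way to construct these arrays is to sort a copy of the window and count runs of equal values. The sketch below builds them from scratch for clarity; the function name is an assumption, and the real code would update the arrays incrementally as the window moves rather than rebuilding them.

```cpp
#include <algorithm>
#include <vector>

// Build unique[] and timesRepeated[] from one window of data by
// sorting a copy and counting runs of equal values (from-scratch
// sketch; an incremental per-tick update is used in practice).
void buildBins(std::vector<double> window,  // taken by value: sorted locally
               std::vector<double>& unique,
               std::vector<int>& timesRepeated) {
    std::sort(window.begin(), window.end());
    unique.clear();
    timesRepeated.clear();
    for (double v : window) {
        if (unique.empty() || unique.back() != v) {
            unique.push_back(v);       // start a new bin
            timesRepeated.push_back(1);
        } else {
            ++timesRepeated.back();    // extend the current bin
        }
    }
}
```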

Having built the data structures to properly update the arrays $unique$ and $timesRepeated$ every time the input window moves, we can now use a different GPGPU launch configuration to achieve acceleration. Where the previous launch used $500 \times 80$ blocks of 512 threads each, the new launch uses $500 \times 80$ blocks of $m_2$ threads each, where $m_2 = 2^{\lceil \log_2(m) \rceil}$. In other words, if $m = 6$, launch 8 threads; if $m = 15$, launch 16 threads; and so on.
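The rounding of $m$ up to $m_2$ can be computed with a few lines of host-side code (the function name is illustrative); a power-of-two thread count keeps the block's parallel reduction simple.

```cpp
#include <cstddef>

// Round m up to the next power of two: m2 = 2^ceil(log2(m)).
// Used to size the thread block for the reduced launch configuration.
std::size_t nextPow2(std::size_t m) {
    std::size_t p = 1;
    while (p < m) p <<= 1;  // double until p covers m
    return p;
}
```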

The work of the first $m$ threads in this new kernel is to calculate $power[tid] = (unique[tid]-p1)^{\alpha[j]}$. Recall from before that $p1 = data[i]$, so there is some additional work to find $p1$. Next, instead of summing over all threads, simply calculate the sum-product $\sum_{i=0}^{m-1} timesRepeated[i] \cdot power[i]$. The savings come from the fact that only $m_2$ threads perform the sum instead of the full 500 threads.
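The quantity each block computes can be written as a short serial C++ sketch (the real version is a CUDA kernel with a parallel reduction; the function name here is an assumption):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Serial equivalent of one block's work under the unique-value bins:
//   sum_{i=0}^{m-1} timesRepeated[i] * (unique[i] - p1)^alphaJ
// Summing m weighted powers replaces summing over all 500 points.
double binnedPowerSum(const std::vector<double>& unique,
                      const std::vector<int>& timesRepeated,
                      double p1, double alphaJ) {
    double sum = 0.0;
    for (std::size_t i = 0; i < unique.size(); ++i)
        sum += timesRepeated[i] * std::pow(unique[i] - p1, alphaJ);
    return sum;
}
```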
 
\textbf{Results:} 
The running times of both the GPGPU program and the CPU program are no longer deterministic. It is easy to see that the running time should vary with $m$, with larger values of $m$ leading to longer runs. Results show that the average time of the program over an updating moving window is consistently between 1 and 2 ms. These times hold whether the lookup table method or the actual ALU calculations are used.

On the CPU-only side, the algorithm runs in 60 ms using a combination of lookup tables and the data structures described above.

\subsection{Drawbacks of Using Data Structures}

We have seen the results of using data structures in algorithm acceleration. However, there are certain drawbacks as well. Foreign exchange data recently increased in precision, from $10^{-4}$ to $10^{-7}$. This change proves troublesome for both lookup tables and unique data bins.

For lookup tables, an increase in data precision from $10^{-4}$ to $10^{-7}$ means an increase in the size of the table, and hence in the memory allocated for it. Let $maxdiff = x_{max} - x_{min}$, where $x_{max}$ and $x_{min}$ are the maximum and minimum of $x_1, x_2, \cdots, x_{500}$, respectively. Then $10^{-4}$ data precision requires a lookup table of $maxdiff \cdot 10^{4} \cdot 80$ elements, whereas $10^{-7}$ precision requires $maxdiff \cdot 10^{7} \cdot 80$ elements, a 1000-fold increase in memory requirement.

The resulting increase will push the limits of CPU memory and render the lookup table method especially tedious for GPGPUs: if the GPGPU does not have enough global memory for the lookup table, accessing CPU memory from the GPU is the only way to use it.

If the developer wishes to define finer values of $\alpha$ such that the number of iterations exceeds 80, the lookup table will grow accordingly and further stretch resources.

Similarly, increases in data precision make the unique data bins method obsolete: it is now highly likely that all 500 values in the window are unique.

As of this date, some financial data still keeps the $10^{-4}$ precision, so the methods outlined in this chapter remain valid for such data.

  
