\section{Conclusion}
\label{conclusion}

In this project, we examined the effect of using GPUs for computations in Galois fields.  We developed three libraries: a naive implementation that performs all calculations directly, an implementation based on Plank's FGAL~\cite{PlankPFGA} that uses the GPU with table lookups, and an implementation that uses the GPU and performs all calculations directly (without tables).  Our original hypothesis was that using the GPU for these calculations would yield a significant speedup. This hypothesis proved incorrect in several cases, and even when speedup did occur, it was not as large as we had originally hypothesized.  However, we were able to determine that for this application the bottleneck when using the GPU is memory transfer between the host and the device.

It is worth noting that the performance of code written for a GPU is extremely dependent on both the compiler and the hardware used.  The hardware used in our tests was not state of the art, so these results are not indicative of the general performance of our libraries relative to the non-GPU libraries.

While our results are not overwhelmingly positive, there is evidence that the use of GPUs in erasure coding computation can lead to performance improvements. Furthermore, others have had similarly limited success in implementing Galois field arithmetic on parallel architectures such as GPUs~\cite{optGFA}.

There are several ways this project could be extended.  It would be interesting to measure performance on state-of-the-art GPUs. We did not explore the use of texture memory, a level in the GPU's memory hierarchy that can yield performance improvements in GPU-based code. Load balancing the computation between the GPU and the CPU is another promising direction: we implemented a naive load balancing approach that was not successful, but load balancing is a complex problem and could be advantageous to explore further. A final improvement we did not pursue is simultaneously increasing the number of threads and the amount of work per thread; we evaluated these parameters separately but did not measure their combined effect.
