\section{Introduction}
Advances in processor technology have enabled financial traders to employ
mathematically complex algorithms of varying degrees in both real-time
trading and backtesting. Parallel programming has gained in popularity as
multicore machines have come to dominate the industry, and advances in GPGPU
have produced a technology that is both high performance and relatively easy
to use. This thesis focuses on applying GPGPU technologies to financial
algorithms.

Top-end multicore CPU machines built on the latest processors, such as the
Intel Xeon, allow the user to control up to 16 threads through programming
interfaces such as OpenMP. By contrast, a single GPGPU machine allows the
user to control up to 65k threads. As a result of the computational
capability of GPGPU machines and the development of the CUDA C language,
programmers are beginning to move the computationally intensive, parallel
regions of their code to the GPGPU.
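To illustrate the scale of parallelism involved, the following minimal CUDA C sketch launches one thread per array element; the kernel name, array size, and scaling operation are hypothetical examples, not code from this thesis. A single launch of this form puts tens of thousands of threads in flight, in contrast to the handful of threads managed on a CPU with OpenMP.

```cuda
#include <cuda_runtime.h>

/* Illustrative kernel: each thread scales one array element. */
__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)
        x[i] *= a;                                   /* one element per thread */
}

int main(void)
{
    const int n = 65536;                 /* 65k elements -> 65k threads */
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));

    /* 256 threads per block; enough blocks to cover all n elements. */
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_x);
    return 0;
}
```

The grid and block dimensions chosen here are illustrative; Chapter 6 examines how such launch parameters affect the acceleration actually achieved.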

This thesis will present two computationally expensive algorithms
that are migrated from a CPU environment to a GPU environment, and will
analyze the challenges and results of performing the calculations on the GPU.

Chapter 2 will present a review of the log-Laplace distribution (Kozubowski and Podgorski, 2003), give an overview of the maximum likelihood
estimators for this distribution, and present an algorithm for fitting data to
this distribution.

Chapter 3 will present a review of a proprietary distribution provided by Aleks Chechkin of BNP. At the request of Aleks Chechkin, this chapter is not available for public view. Those wishing to view this chapter must contact Professor Xinming Huang.

Chapter 4 will begin with a review of CUDA C, then discuss the challenges and techniques of coding the algorithms in C++ and CUDA C. Various acceleration techniques in CUDA will be presented.
    
Chapter 5 will show how the sequential algorithm is turned into a parallel CUDA algorithm, and will compare the performance of the CUDA C implementation against that of the C++ implementation.

Chapter 6 will present an in-depth look at the techniques used to obtain and enhance the acceleration achieved in Chapter 5, and will discuss why certain methods of acceleration worked better than others. This chapter will also show that placing multiple GPGPU devices in the same machine leads to greater acceleration when CUDA is combined with OpenMP.

Chapter 7 will present methods to further accelerate the algorithm. These methods rely on assumptions about the nature of the input data; their results and limitations are presented.

Chapter 8 presents the conclusion of this research.