
% \documentclass{article}
 %\title{Language syntax}

 %\setlength{\oddsidemargin}{-0.2in}
 %\setlength{\evensidemargin}{-0.2in}
 %\setlength{\textwidth}{7in}
 %\setlength{\textheight}{9in}
	
 %\begin{document}
 %\maketitle

\section{Introduction}
The Matpix language provides an execution model that shares computation between a CPU and a GPU. GPUs provide highly accelerated arithmetic with only limited control flow features for imperative programming. The GPU computation paradigm resembles functional programming in the sense that data is essentially immutable during computation. Matpix, however, is not a functional language, so executing imperative Matpix programs requires shared processing between the GPU and CPU. At the core of the Matpix compiler is a facility to divide program execution into control flow instructions, which execute on the CPU, and arithmetic instructions, which execute on the GPU.

\section{GPGPU Computation Model}     
Modern GPU architecture evolved from the highly parallel graphics computations needed for raster-based rendering. Current GPUs have multiple processing units that execute a reduced instruction set in a highly parallel manner. In particular, GPUs have very poor support for branching operations. They are optimized for vector arithmetic in which the data and computations are assumed to be completely independent. These optimizations provide highly accelerated arithmetic and memory operations at the cost of limited computational functionality.

\subsection{Mapping Computations to the GPU}
Not all computations can be mapped to the GPU. The computations best suited to the GPU are ``kernel''-based operations that perform the same operation on every element of a large array, with the kernel applied once per element. Data-dependent branching on the GPU is extremely expensive, and is typically implemented by following each execution branch simultaneously.

\subsection{Computation}
The last major update to the OpenGL standard included an interface for creating and running programs that are executed directly by the processing units on a GPU. The interface includes a standard C-like language called the ``OpenGL Shading Language'' (GLSL), as well as standards for compiling and linking GLSL programs into instructions that execute on the GPU. GLSL programs come in two flavors, fragment and vertex. For reasons I do not go into here, all Matpix GPU computations are implemented as fragment programs.

A GPU computation can be thought of as a program that loops through every element of an array and applies a kernel to that element. Probably the biggest limitation of the GPU computation model lies in the fact that the programmer has very little control over the outer loop of this computation. For instance, there is no way to tell the GPU to stop executing the loop once it has begun. In addition, the extremely expensive final stage of a GPU computation, in which output pixel values are written to the frame buffer, cannot be cancelled once the main loop is entered. This means that without great care, much of the computational load placed on the GPU will be entirely wasted. So although GLSL does include common control flow statements such as ``for'' loops and ``if-then'' statements, these are not as useful as they might first appear. To maximize the computational load carried by the GPU, Matpix implements all control flow operations on the CPU.

\subsection{Memory}
Modern GPUs provide extremely fast memory in quantities typically a factor of 2--4 less than CPUs. With close to 1GB of VRAM available on current GPUs, memory is plentiful enough for most applications. However, the GPU optimization model inherently restricts memory operations. GPUs do not provide random-access memory the way most CPUs do. Instead, GPUs provide read-only memory operations during execution, and write-only operations once the computation is completed.

\subsubsection{Read-only Memory : Textures}
In graphics, textures are used to add a high level of visual complexity to models with relatively simple geometry. Textures are loaded onto the GPU by transferring 2D image data from CPU RAM to the GPU, where they are then stored in local texture memory. Texture read operations are allowed during GPU fragment program execution, where the value of an output pixel is being calculated. For GPGPU calculations, textures are treated as arbitrary memory regions and texture read operations are treated as the memory read operations that allow access to this region of memory. 

\subsubsection{Write-only Memory : Frame Buffer}
After a fragment program is executed, the calculated value of the output pixel is written to a region of texture memory known as the ``frame buffer''. In typical graphics applications, once fragment program execution finishes, the contents of the frame buffer are sent directly to the graphics port, which converts the pixel data into a signal that can be accepted by a computer monitor such as an LCD screen. For GPGPU computations, the frame buffer contains the output of the fragment program execution, and the results must be sent back to the CPU for output or further processing. It is also possible to copy the contents of the frame buffer into a region of texture memory (so that it can be read during subsequent fragment program execution).

\subsection{OpenGL Requirements}
There are two important GPU features required for GPGPU computation. Both are provided by modern OpenGL driver implementations for all dedicated graphics cards produced in the last 3--5 years. The first is the ``shader program object'' extension, which provides the interface for compiling, linking, and executing fragment programs. The second, slightly less ubiquitous feature is the ``texture float'' extension, which provides floating point textures. Typically, textures store just 8 bits per color channel, and are treated analogously to unsigned characters in C. A special OpenGL extension must be enabled to store floating point texture data on the GPU. Currently, all floating point GPU data is 32-bit; there is no double precision floating point representation or computation on even the most state-of-the-art GPUs.
     
\subsection{Software Mode}
GPUs typically come in two flavors: ``dedicated'' graphics cards and ``integrated'' graphics chips. Dedicated graphics cards are connected to the motherboard via slots that allow the GPU to communicate with the CPU over an external bus such as PCI, AGP, or PCI-Express. Integrated graphics chips are typically found on consumer-level laptops, and often have a dramatically reduced feature set compared to their dedicated counterparts. In particular, integrated graphics chips typically do not provide floating point textures, which means they cannot be used for GPGPU computations. However, OpenGL drivers on most operating systems provide a mechanism to turn off all GPU hardware acceleration and emulate GPU computations in software. These software OpenGL implementations typically do provide floating point textures and can therefore be used for GPGPU computations. The Matpix compiler can create executable code that runs in either ``hardware accelerated'' mode or ``software'' mode. This was necessary because several members of the group needed to work on laptops with non-GPGPU-compliant integrated graphics chips.

\section{CPU Computation}
As already mentioned, Matpix redirects all control flow operations to the CPU. Matpix source files are compiled into C++ source files, which are then compiled with a C++ compiler and linked against the Matpix ``standard'' library. All control flow statements in Matpix source files are translated into the analogous statements in C++ syntax.

\section{Communication Between the GPU and CPU}
Because the GPU sits on a dedicated bus with limited bandwidth, data transfer between the CPU and GPU can be a major limitation in GPU-based computation. Indeed, this is often \emph{the} limiting factor for GPU computations. Matpix addresses this limitation by minimizing data transfers as much as possible. Data is transferred from the CPU to the GPU only when scalar constants are assigned to array elements. Data is transferred from the GPU to the CPU only when output is required for printing (by the ``print'' function). The results of all computations are kept on the GPU by copying the contents of the frame buffer to texture memory, so that the result of one fragment program computation is available for the next operation.

\section{The Complete System}
The Matpix compiler consists of a compilation environment and a run-time environment. The compiler consists of a front-end, which processes Matpix source files, and a back-end, which turns compiled Matpix source files into an executable. The compiler front-end consists of the familiar lexer, parser, and tree walker. Together, these components perform syntactic and static semantic error checking on Matpix source programs, and produce a C++ source file for error-free programs. The back-end consists of the Matpix C++ standard library, which manages GPU/CPU communication via OpenGL and performs dynamic semantic analysis on matrix computations. The Matpix C++ standard library also produces fragment program source files, which are sent to the GPU for compilation and linking. The run-time environment consists of the familiar C++ run-time and the OpenGL run-time, which communicates with the GPU via the AGP/PCI-Express bus. The OpenGL run-time is responsible for initializing and activating fragment programs, and for delegating data transfer between the CPU and GPU.

\begin{figure}[htbp] %  figure placement: here, top, bottom, or page
   \centering
   \includegraphics[width=6in]{figures/blockdiagram.png} 
   \caption{Matpix compiler block diagram}
   \label{fig:block}
\end{figure}


% \end{document}

