\section{Optimizations}
\label{sec:optimize}

In this section we review the main classes of optimizations for increasing the execution speed of matrix multiplication.  We discuss in more detail the techniques we attempted to implement and note those we were unable to evaluate.  The ideas for these techniques come from~\cite{chellappa:08} and~\cite{goto:08}.

The primary goal in all of these techniques is to take advantage of the structure and behavior of the target machine's memory hierarchy in order to speed execution.  The more data can be read from the faster, higher levels of the cache hierarchy, the faster the multiplication will execute.  A secondary goal of optimization is to maximize the pipelining behavior of the processor.

\begin{table}\footnotesize
\centering
\begin{tabular}{|l|l|l|}
\hline
{\bf Machine Attributes} & {\bf Franklin} & {\bf Little Bear} \\
\hline
Processor Type & Quad-Core AMD Opteron (Budapest) & Dual-Core Intel Xeon \\
\hline
Processor Speed & 2.3 GHz & 2.66 GHz \\
\hline
Number of SSE Registers & 16 & 16 \\
\hline
L1 Cache Size & 64 KB & 64 KB \\
\hline
L2 Cache Size & 1024 KB & 4 MB \\
\hline
TLB Size & 1024 4K pages & 1024 4K pages \\
\hline
\end{tabular}
\caption{Machine parameters relevant to optimizing matrix multiply, in particular the characteristics of the memory hierarchy.}
\label{tab:machine_parameters}
\end{table}


\subsection{Blocking}

Blocking divides the main matrix multiply operation into a series of operations on submatrices, choosing the size of these submatrices, or blocks, so that the blocks currently being operated on fit in cache, thereby achieving memory locality.  In particular, matrices are blocked on three levels.  The first level of blocking attempts to fit all three matrix blocks in the L2 cache, the second in the L1 cache, and the third in registers.  This process turns the three original loops of matrix multiply into a set of nine loops, in three groups of three.  The difficulty is then choosing block sizes that bound the number of iterations in these loops so that the working set fits in the cache.  In our case, given the parameters of our target machines shown in Table~\ref{tab:machine_parameters}, we chose an L2 block size of 200 and an L1 block size of 64.  These sizes were chosen based on formulas found in~\cite{chellappa:08}; we do not reproduce them here.
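To make the loop structure concrete, the following is a minimal sketch of a single level of blocking in C.  The function name, the restriction to square $n \times n$ matrices, and the use of only one block size (\texttt{NB}, our L2 block size of 200) are simplifications for illustration; the full scheme nests three such levels.  Matrices are stored in column-major order, as on Franklin.

```c
#include <stddef.h>

#define NB 200  /* L2 block size from the text */

/* C += A * B for n-by-n column-major matrices, with one level of
   blocking: the three inner loops touch only NB-by-NB submatrices,
   so their working set can stay resident in the L2 cache. */
void dgemm_blocked(int n, const double *A, const double *B, double *C)
{
    for (int jj = 0; jj < n; jj += NB)
        for (int kk = 0; kk < n; kk += NB)
            for (int ii = 0; ii < n; ii += NB) {
                int jmax = jj + NB < n ? jj + NB : n;
                int kmax = kk + NB < n ? kk + NB : n;
                int imax = ii + NB < n ? ii + NB : n;
                for (int j = jj; j < jmax; j++)
                    for (int k = kk; k < kmax; k++) {
                        double b = B[k + (size_t)j * n];
                        for (int i = ii; i < imax; i++)
                            C[i + (size_t)j * n] +=
                                A[i + (size_t)k * n] * b;
                    }
            }
}
```

Nesting a second copy of the outer three loops with the L1 block size, and a third with a register block size, yields the nine-loop structure described above.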

\subsection{Loop Unrolling}

Loop unrolling transforms a control loop that iterates over an index variable into a sequence of explicit statements in which the index is advanced manually.  Loop unrolling is often performed by the compiler but can be more effective when done by hand.  The constraint is that unrolling can only be done when the number of iterations is known beforehand, and the trade-off is that unrolling a large loop by hand greatly increases the number of source lines of code, which can be quite tedious to write.  As such, we only unrolled the innermost of our nine nested loops, {\it i.e.}, those which operate on data in the registers.  This brings us directly to the next optimization.
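As an illustration, here is a dot-product loop unrolled by a factor of four (the function name is ours, and for brevity we assume the trip count is a multiple of four, which holds inside fixed-size register blocks):

```c
/* Dot product with the loop body unrolled four times.  Using four
   separate accumulators also breaks the dependence chain between
   consecutive additions, which helps the pipeline. */
double dot_unrolled(int n, const double *x, const double *y)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    for (int i = 0; i < n; i += 4) {
        s0 += x[i]     * y[i];
        s1 += x[i + 1] * y[i + 1];
        s2 += x[i + 2] * y[i + 2];
        s3 += x[i + 3] * y[i + 3];
    }
    return (s0 + s1) + (s2 + s3);
}
```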

\subsection{Streaming SIMD Extensions Instructions}

Many processors provide a set of special instructions that operate on dedicated 128-bit SIMD registers.  These instructions operate on several data items in parallel and can thus greatly speed up execution if used correctly.  We incorporated SSE instructions into our unrolled inner loop.
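The following sketch shows the kind of SSE2 multiply-accumulate kernel that fits in an unrolled inner loop; it updates two double-precision elements at once using compiler intrinsics.  The function name and interface are ours, chosen for illustration.

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* c[0:2] += a[0:2] * b, two doubles per instruction.  A 128-bit
   SSE register holds exactly two doubles, so one multiply and one
   add here do the work of two scalar multiply-adds. */
void axpy2(const double *a, double b, double *c)
{
    __m128d va = _mm_loadu_pd(a);   /* load a[0], a[1] */
    __m128d vb = _mm_set1_pd(b);    /* broadcast the scalar b */
    __m128d vc = _mm_loadu_pd(c);
    vc = _mm_add_pd(vc, _mm_mul_pd(va, vb));
    _mm_storeu_pd(c, vc);
}
```

With data aligned to 16-byte boundaries, the unaligned loads and stores can be replaced by their aligned counterparts for additional speed.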

\subsection{Instruction Scheduling}

Instruction scheduling simply involves ordering instructions so that independent operations, {\it i.e.}, those in which the input of one is not the output of another, are interleaved between dependent instructions.  This reduces the chance of CPU pipeline stalls.  We attempted to do source scheduling, {\it i.e.}, instruction scheduling at the source-statement level, where possible, in particular in the unrolled inner loop that used SSE instructions.
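A small example of source-level scheduling, with a hypothetical function of our own devising: two independent multiply-add chains are interleaved so that no statement consumes the result of the statement immediately before it.

```c
/* Compute y += alpha*x and v += beta*u with the two chains
   interleaved.  Chain A (t0, y) and chain B (t1, v) share no data,
   so the pipeline always has an independent instruction to issue. */
void scale2(int n, double alpha, const double *x, double *y,
            double beta, const double *u, double *v)
{
    for (int i = 0; i < n; i++) {
        double t0 = alpha * x[i];   /* chain A, step 1 */
        double t1 = beta  * u[i];   /* chain B, step 1: independent */
        y[i] = y[i] + t0;           /* chain A, step 2 */
        v[i] = v[i] + t1;           /* chain B, step 2: independent */
    }
}
```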

\subsection{Buffering}

Matrices on Franklin's compute nodes are stored in column-major order.  Thus, when operating on a submatrix that spans several columns but not all rows, the data in that submatrix is unlikely to be contiguous in memory.  This matters because of the processor's cache line size: when a command is issued to load the data at a memory address into the cache, what is actually loaded is a block of memory starting at that address and equal in size to a cache line.  If the data to be accessed lies in non-contiguous strides of memory, unnecessary data is loaded into the cache, occupying space that would otherwise hold useful data and leading to cache thrashing.  There can therefore be an advantage to copying these blocks into a contiguous address range before operating on them.  However, there is again a trade-off: for very small or very large matrices, the overhead of copying into a second buffer exceeds the benefit of having the data in a contiguous block of memory.  Given that buffering complicates the control loop structure and does not always yield a clear advantage, we did not implement this optimization.
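Although we did not implement buffering, the packing step it requires can be sketched briefly (the function name and interface are our own):

```c
#include <stddef.h>
#include <string.h>

/* Copy an m-by-k block of a column-major matrix with leading
   dimension lda into a contiguous buffer.  Each source column is a
   contiguous run of m doubles, but consecutive columns are lda
   doubles apart; the buffer packs them back to back. */
void pack_block(int m, int k, const double *A, int lda, double *buf)
{
    for (int j = 0; j < k; j++)
        memcpy(buf + (size_t)j * m, A + (size_t)j * lda,
               (size_t)m * sizeof(double));
}
```

The inner kernels would then read from \texttt{buf}, whose columns are cache-line friendly, instead of striding through the original matrix.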

\subsection{Other Possible Optimizations}

There are a number of other optimizations that we did not apply, including scalar replacement, precomputation of constants, and more complicated matrix traversals, such as iterating over matrix entries in a spiral pattern.  We did not apply these either because of their implementation complexity or because they were not applicable to our setting.