\section{Results}
\label{sec:results}

\begin{figure}[h!]
\includegraphics[width=\textwidth]{./figs/franklin_gcc.pdf}
\caption{The results of running our optimized matrix multiplication code on Franklin for matrices of varying size, compared with a naive strategy, a simple blocking strategy, and the highly optimized library matrix multiplication.}
\label{fig:franklin}
\end{figure}

\begin{figure}[h!]
\includegraphics[width=\textwidth]{./figs/littlebear_gcc.pdf}
\caption{The results of running our optimized matrix multiplication code, compiled with the GNU compiler, on Little Bear for matrices of varying size, compared with a naive strategy, a simple blocking strategy, and the highly optimized library matrix multiplication.}
\label{fig:littlebear_gcc}
\end{figure}

\begin{figure}[h!]
\includegraphics[width=\textwidth]{./figs/littlebear_icpc.pdf}
\caption{The results of running our optimized matrix multiplication code, compiled with the Intel compiler, on Little Bear for matrices of varying size, compared with a naive strategy, a simple blocking strategy, and the highly optimized library matrix multiplication.}
\label{fig:littlebear_icpc}
\end{figure}

We first evaluated our tuned matrix multiply on Franklin.  The results in Figure~\ref{fig:franklin} show that our scheme of cache tiling and instruction scheduling provides, in most instances, a $2\times$ speedup over the initial blocked code, achieving roughly 30\% of peak performance at mid-to-large matrix sizes.  We make several observations.
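The cache-tiling scheme described above can be sketched as a two-level blocked multiply: the outer loops tile for the L2 cache, the inner loops carve each L2 block into smaller tiles whose working set fits in L1, and the innermost kernel accumulates each $C_{ij}$ in a register.  This is a minimal illustration only; the block sizes shown are placeholders, and the actual tuned kernel additionally uses the instruction-scheduling and alignment techniques discussed below.

```c
#include <stddef.h>

/* Placeholder block sizes for illustration; the tuned values used
   in the experiments differ (see text). */
enum { BLOCK_L2 = 208, BLOCK_L1 = 32 };

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Innermost tile (column-major, leading dimension n): accumulate
   each C(i,j) in a register across the k dimension before writing
   it back, so C is touched once per tile. */
static void kernel(size_t n, size_t M, size_t N, size_t K,
                   const double *A, const double *B, double *C)
{
    for (size_t j = 0; j < N; ++j)
        for (size_t i = 0; i < M; ++i) {
            double cij = C[i + j * n];
            for (size_t k = 0; k < K; ++k)
                cij += A[i + k * n] * B[k + j * n];
            C[i + j * n] = cij;
        }
}

/* C := C + A*B for n-by-n matrices with two levels of blocking:
   outer loops tile for the L2 cache, inner loops for L1. */
void square_dgemm(size_t n, const double *A, const double *B, double *C)
{
    for (size_t j2 = 0; j2 < n; j2 += BLOCK_L2)
        for (size_t k2 = 0; k2 < n; k2 += BLOCK_L2)
            for (size_t i2 = 0; i2 < n; i2 += BLOCK_L2)
                /* L1-level tiles within the current L2 block */
                for (size_t j1 = j2; j1 < MIN(j2 + BLOCK_L2, n); j1 += BLOCK_L1)
                    for (size_t k1 = k2; k1 < MIN(k2 + BLOCK_L2, n); k1 += BLOCK_L1)
                        for (size_t i1 = i2; i1 < MIN(i2 + BLOCK_L2, n); i1 += BLOCK_L1)
                            kernel(n,
                                   MIN(BLOCK_L1, n - i1),
                                   MIN(BLOCK_L1, n - j1),
                                   MIN(BLOCK_L1, n - k1),
                                   A + i1 + k1 * n,
                                   B + k1 + j1 * n,
                                   C + i1 + j1 * n);
}
```

The fringe arithmetic (the `MIN` calls) handles matrix sizes that are not multiples of the block sizes, which matters for the odd-sized cases discussed below.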

First, our code still suffers from the cache conflicts that arise for matrices whose dimension is a power of 2.  This could have been mitigated through copy optimization, though we did not implement this in our code.

Second, fitting all three submatrices into the cache on a Franklin node requires an L2 block size of 208 or less.  We observed that a smaller L2 block size significantly sped up smaller matrix multiply operations, but was detrimental to the performance of large ones.

Third, the L1 block size controls how many register operations are performed on each $C_{ij}$ before it is written back to the array.  We hand-tuned this value in the vicinity of the theoretical value to minimize memory operations.  A small L1 block size also improved the computation of smaller matrix multiply operations, but again slowed down large matrix computations due to the overhead of excessive reading and writing of matrix $C$.  This suggests an interesting optimization strategy: choosing the block size based on the size of the computation at hand.

Additionally, we observed performance spikes at even matrix sizes.  For such matrices we were able to use aligned loads, while odd-sized matrices necessitated unaligned loads, which are more costly.

Lastly, we note that the choice of compiler and compiler flags is extremely important.  Our code performed very poorly when using the Portland Group compiler, and improved considerably upon switching to the GNU compiler; the results presented here are therefore for the GNU compiler.  We chose compiler flags on the basis of recommendations and experimentation; the flags used can be found in the Makefile provided with this report.  Notably, for all machines and all compilers, we observed a performance improvement on the order of 5~Mflop/s with the use of the compiler profiling flags.
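The L2 block-size bound quoted above follows from a simple capacity argument: one $b \times b$ double-precision block of each of $A$, $B$, and $C$ (8 bytes per entry) must fit in the L2 cache simultaneously.  Writing $S_{\mathrm{L2}}$ for the cache capacity in bytes,
\begin{equation*}
3 \cdot 8\, b^2 \le S_{\mathrm{L2}}
\quad \Longrightarrow \quad
b \le \sqrt{\frac{S_{\mathrm{L2}}}{24}}.
\end{equation*}
Assuming a 1~MB L2 cache per Franklin core (see the machine parameters table), i.e.\ $S_{\mathrm{L2}} = 2^{20}$ bytes, this gives $b \le 209$, consistent with the even block size of 208 used above.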

We also evaluated our code on a dual-core Intel Xeon machine dubbed Little Bear, whose parameters are also shown in Table~\ref{tab:machine_parameters}.  In this case we tested our code using both the GNU compiler and the Intel compiler; the results are shown in Figure~\ref{fig:littlebear_gcc} and Figure~\ref{fig:littlebear_icpc}, respectively.  We observe many of the same trends as on Franklin, with the Intel compiler yielding even better performance still.

