
\section{Implementation Details}
\label{sec:implement}
\vspace{-5pt}
We implemented our pass and tested it within the LLVM infrastructure. 
In this section we shall describe the basic steps and their ordering.
\subsection{Prior Analyses and Transformations}
Our SLP pass relies on several transformations and analyses, already built into LLVM, to
expose parallelism and maintain correctness.

\begin{itemize}
\item \textbf{Loop Unrolling:} Loop unrolling transforms loop-level vector parallelism
into basic blocks that contain superword level parallelism.

\item \textbf{Loop Invariant Code Motion:} Performing LICM prior to unrolling prevents invariant statements
from being duplicated when the loop body is replicated. We encountered this problem during our experiments and found
that LICM prevents the formation of redundant packs. For certain applications, such as VMM (vector-matrix multiply), LICM reduced the number of pack sets by a factor of two.

\item \textbf{Memory Dependence Analysis:} Dependence analysis before packing ensures that statements
within a group can be executed safely in parallel.

\item \textbf{Scalar Evolution:} Used to determine whether two memory references are adjacent in memory by evaluating their addresses.
\item \textbf{Others:} \textit{Simplify CFG, mem2reg, instruction combine, dead code elimination}
are some of the standard optimizations we apply to an input program. This ensures that parallelism
is not extracted from computation that would otherwise be eliminated. They also help remove redundant statements generated by SLP, such as Unpack and Pack statements between two consecutive vector instructions.
\end{itemize}  
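To illustrate the first of these steps, the following sketch (the array names and the unroll factor of four are illustrative, not taken from our benchmarks) shows how unrolling a simple loop turns loop-level parallelism into independent, isomorphic statements within one basic block, exactly the pattern our pass packs:

```cpp
// Original loop: one statement per iteration.
void scale(float *a, const float *b, int n) {
    for (int i = 0; i < n; ++i)
        a[i] = b[i] * 2.0f;
}

// After unrolling by 4: four independent, isomorphic statements
// per iteration, a candidate group for a single <4 x float> multiply.
void scale_unrolled(float *a, const float *b, int n) {
    int i = 0;
    for (; i + 3 < n; i += 4) {
        a[i]     = b[i]     * 2.0f;
        a[i + 1] = b[i + 1] * 2.0f;
        a[i + 2] = b[i + 2] * 2.0f;
        a[i + 3] = b[i + 3] * 2.0f;
    }
    for (; i < n; ++i)   // remainder loop for trip counts not divisible by 4
        a[i] = b[i] * 2.0f;
}
```

The four multiplies in the unrolled body access adjacent memory locations and can be replaced by one vector instruction.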
%-debug -mem2reg -loop-rotate -licm -loop-unroll -unroll-count (UNROLL_COUNT) -unroll-threshold 1000 -simplifycfg -basicaa
%POSTOPTS = -instcombine -adce -stats -p

\subsection{Code Components}
While we have tried to keep the code readable through detailed comments, here we list some of the important data structures and functions. We have just one globally defined data structure, called \textit{PackSet}, which is a set of \textit{Pack(s)}. A \textit{Pack} is a vector of instruction pointers, where each pointer $p_i$ points to an instruction $s_i$, and all instructions $s_1,s_2,\ldots,s_n$ are independent isomorphic statements in a basic block.
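As a rough, self-contained sketch of these structures (using a placeholder \texttt{Instruction} type in place of LLVM's \texttt{llvm::Instruction}, which the real pass stores pointers to):

```cpp
#include <vector>

// Stand-in for llvm::Instruction; the real pass stores pointers
// into the LLVM IR of the basic block being vectorized.
struct Instruction { int id; };

// A Pack: pointers p_1..p_n to independent, isomorphic
// statements s_1..s_n drawn from a single basic block.
using Pack = std::vector<Instruction *>;

// The single global PackSet: the set of all current Packs.
using PackSet = std::vector<Pack>;

// findAdjRefs() seeds the PackSet with Packs of length two.
Pack makeSeedPack(Instruction *s1, Instruction *s2) {
    return Pack{s1, s2};
}
```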

Following is the list of important functions in our pass.
\begin{itemize}
\item \textbf{\textit{findAdjRefs():}} It finds isomorphic statements in the basic block. If it finds two isomorphic statements, it creates a \textit{Pack} of length two and puts it in the \textit{PackSet}.
\item \textbf{\textit{extendPacklist():}} This function extends the packs created by \textit{findAdjRefs()} using the use-def chains of operands used in these packs.
\item \textbf{\textit{combinePacks():}} After \textit{findAdjRefs()} and \textit{extendPacklist()} we have only packs of length 2 in the \textit{PackSet}. This function is responsible for merging two isomorphic packs into packs of larger size.
\item \textbf{\textit{schedule():}} Although we check the dependencies between two statements while creating packs, it is possible that after merging, two packs have a cyclic dependency. Keeping this in mind, \textit{schedule()} tries to find a legal schedule of all the singleton instructions and packs in the basic block.
\item \textbf{\textit{emitCode():}} This function is responsible for replacing packs with vector instructions in the LLVM code. It implements the algorithm discussed in section~\ref{sec:algorithm}.
\end{itemize}

\subsection{Testing strategy and status}
After studying the SLP algorithm~\cite{SLP}, we first developed small
micro-test kernels for various favourable as well as unfavourable scenarios
(such as memory dependencies, type casting, etc.). These
micro kernels were used to test the robustness of our pass. Once we had
implemented our pass, we ported the following five multimedia kernels and used them
to evaluate its performance: MMM (matrix-matrix multiply), VMM (vector-matrix multiply), FIR (finite impulse response), IIR (infinite impulse response), and YUV (converts RGB to YUV and vice versa).
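As an example of the kind of kernel involved, a minimal FIR filter looks like the following (the tap count and data layout here are illustrative, not the benchmark's actual parameters); after the inner loop is unrolled, the multiply-accumulate statements become isomorphic candidates for packing:

```cpp
// Minimal FIR (finite impulse response) kernel: each output is a
// dot product of the tap coefficients with a sliding input window.
void fir(const float *in, const float *taps, float *out,
         int n, int ntaps) {
    for (int i = 0; i + ntaps <= n; ++i) {
        float acc = 0.0f;
        for (int t = 0; t < ntaps; ++t)
            acc += in[i + t] * taps[t];
        out[i] = acc;
    }
}
```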

The results and analysis are discussed in section~\ref{sec:eval}.

\subsection{Implementation status}

We have fully implemented the SLP algorithm and tested it on the micro and
multimedia benchmarks. We also tried to port the SPEC2006 benchmarks, but
due to time constraints we were not able to compile them.

\subsection{Source Code Location and Build Instructions}

We developed the SLP pass as a dynamically linked library, in the
same way as CS526.MP1; running ``opt -load \$LIB\_PATH/MP2.so -slp'' enables
it. The algorithm source file is in the lib directory, and the
test directory includes the micro-tests and the multimedia tests. The build steps are:
\begin{enumerate}
\item Execute ``./configure --with-llvmsrc=\$LLVMROOT --with-llvmobj=\$LLVMROOT'' in the source directory.
\item Execute ``make'' to compile the dynamic library.
\item Go into test/multimedia and execute ``make FIR.test.ll'' to generate the SLP-vectorized FIR code, or execute ``make FIR.result'' to generate the FIR executables and measure their performance.
\end{enumerate}


