\vspace{-5pt}
\section{Evaluation}
\label{sec:eval}
\subsection{Evaluation Methodology}
\vspace{-5pt}
We evaluated our SLP pass on both the EWS machine and our own server. The
server is equipped with an Intel Xeon E3-1245 processor \cite{Intel:E3-1245},
which supports both the SSE4.2 and AVX SIMD instruction sets. The operating
system is Fedora Linux 16 with Linux kernel 3.1.0-0.rc10.git0.1 running in
x86\_64 mode, and we used LLVM SVN head revision r155800 to compile and run our
auto-vectorization pass.

The compilation process consists of four steps: 1) use clang to generate the
raw .bc file; 2) use opt to perform code optimization; 3) use llc to generate
the assembly (.s) file; 4) use clang to generate the executable from the
assembly file. We use optimization flags only in step 2; the flags are

\begin{itemize}
\item \textbf{Pre Vec Opts}:  -mem2reg -loop-rotate -licm -loop-unroll
-unroll-count 4 -unroll-threshold 1000 -simplifycfg -basicaa
\item \textbf{Vec Opts}: -slp or -bb-vectorize or nothing
\item \textbf{Post Vec Opts}: -instcombine -adce
\end{itemize}

As mentioned in the description of the SLP algorithm, our auto-vectorization
pass requires some pre- and post-transformations, which are given by
\textbf{Pre Vec Opts} and \textbf{Post Vec Opts}; \textbf{Vec Opts} selects
the vectorization method.
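Under these flags, one compile run for the SLP-VEC configuration can be
sketched as below (the source file name \texttt{fir.c} is a hypothetical
example; replace \texttt{-slp} with \texttt{-bb-vectorize}, or drop it, for the
other configurations):

```shell
# 1) C source -> raw LLVM bitcode, no optimization yet
clang -emit-llvm -c fir.c -o fir.bc
# 2) all optimization happens in opt: pre-vec, vec, post-vec flags
opt -mem2reg -loop-rotate -licm -loop-unroll -unroll-count 4 \
    -unroll-threshold 1000 -simplifycfg -basicaa \
    -slp \
    -instcombine -adce fir.bc -o fir.opt.bc
# 3) optimized bitcode -> assembly
llc fir.opt.bc -o fir.s
# 4) assembly -> executable
clang fir.s -o fir
```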

To obtain precise performance results, we evaluated the performance under
five running configurations:

\begin{itemize}
\item \textbf{RAW}: no optimizations.
\item \textbf{NO-VEC}: all optimization flags described above, but no vectorization flag.
\item \textbf{BB-VEC}: the LLVM built-in auto-vectorization \cite{bb-vectorize}.
\item \textbf{SLP-VEC}: our implementation of the SLP algorithm.
\item \textbf{LLVM-O2}: opt -O2 applied to the benchmark.
\end{itemize}

All micro-tests and multimedia benchmark tests pass on both the EWS machine
and our own server. Because the EWS machine is not a quiet, dedicated server,
all performance data reported here was collected on our own server.


\subsection{Experimental Results}

The experimental results are presented in Table~\ref{tab:multimedia_perf};
execution times are reported in seconds.

\begin{table}
\centering
\caption{Execution Time of the Multimedia Benchmarks (Unit: s)}
\label{tab:multimedia_perf}
\begin{tabular}{|l|r|r|r|r|r|} \hline
Benchmark & RAW & NO-VEC & BB-VEC & SLP-VEC & LLVM-O2 \\
\hline FIR & 0.188 & 0.164 & 0.164 & 0.171 & 0.062 \\
\hline IIR & 0.237 & 0.221 & 0.265 & 0.227 & 0.141 \\
\hline MMM & 0.790 & 0.215 & 0.468 & 0.207 & 0.274 \\
\hline VMM & 0.390 & 0.355 & 0.352 & 0.377 & 0.126 \\
\hline YUV & 0.096 & 0.059 & 0.030 & 0.034 & 0.034 \\
\hline 
\end{tabular}
\end{table}
\vspace{-10pt}
\subsection{Analysis of the Result}

One finding is that the gain from vectorization varies across benchmarks and
configurations: (1) SLP-VEC performs well on MMM; (2) LLVM-O2 runs much faster
on FIR, IIR, and VMM; and (3) BB-VEC, SLP-VEC, and LLVM-O2 are comparable on
YUV. Notably, comparing the vectorizing configurations SLP-VEC and BB-VEC
against NO-VEC shows that vectorization can even cause slowdowns: BB-VEC on MMM
is much slower than NO-VEC, and on VMM both SLP-VEC and BB-VEC perform slightly
worse than NO-VEC.

We carefully studied why SLP auto-vectorization does not perform as well as
expected in these cases.

The first finding is that SLP performs well when many isomorphic packs are
identified and no packs are broken later. Table~\ref{tab:packs_stat} shows the
raw packs identified and the packs broken due to memory-dependence or
data-dependence violations. Only isomorphic packs can be translated into SIMD
instructions. When SIMD and scalar instructions are mixed, we must insert many
packing and unpacking statements, which introduces substantial overhead; as a
result, this overhead offsets the gain from SIMD.
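To illustrate the shape SLP looks for, the following hypothetical kernel (not
taken from the benchmark sources) contains four isomorphic statements after
4x unrolling: the same operation applied to adjacent memory locations, which
can be packed into a single SIMD multiply. If any one lane performed a
different operation, the pack would be broken and shuffle overhead inserted.

```c
/* Hypothetical example of an isomorphic pack: four identical
 * multiplies on adjacent array elements, as produced by 4x
 * loop unrolling. SLP can fuse them into one SIMD instruction. */
void mul4(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i + 3 < n; i += 4) {
        c[i]     = a[i]     * b[i];
        c[i + 1] = a[i + 1] * b[i + 1];
        c[i + 2] = a[i + 2] * b[i + 2];
        c[i + 3] = a[i + 3] * b[i + 3];
    }
}
```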

\begin{table}
\centering
\caption{Packs Identified by the SLP Algorithm}
\label{tab:packs_stat}
\begin{tabular}{|l|r|r|r|r|r|r|} \hline
Benchmark & FIR & IIR & MMM & VMM & YUV & FIR(refine) \\
\hline Raw Packs & 44 & 96 & 60 & 44 & 234 & 56 \\
\hline Broken Packs & 24 & 40 & 0 & 24 & 0 & 0 \\
\hline SIMD Packs & 20 & 56 & 60 & 20 & 234 & 56 \\
\hline 
\end{tabular}
\end{table}
\vspace{-5pt}
We studied why packs are broken. One frequent pattern in the multimedia
benchmarks is ``a[i] = a[i] + some computation''. This memory dependence forces
SLP to break some packs and to insert packing and unpacking statements. To
verify this hypothesis, we slightly modified FIR into a refined version,
replacing the pattern with ``b[i] = some computation'' and ``a[i] = a[i] +
b[i]'' to break the dependence, and then manually performed loop distribution
to split the two statements into separate loops. As
Table~\ref{tab:packs_stat} shows, no packs are broken in the refined version,
and all 56 raw packs become SIMD packs. Performance also improves under both
BB-VEC and SLP-VEC (Table~\ref{tab:FIRr_perf}). The performance loss of
LLVM-O2 comes from the larger loop body of the refined FIR.
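A minimal sketch of this rewrite follows; the function names are hypothetical
and a generic \texttt{k * x[i]} stands in for ``some computation'', so this is
an illustration of the transformation, not the actual FIR kernel.

```c
/* Pack-breaking pattern: read-modify-write on a[] creates the
 * memory dependence that forces SLP to break packs. */
void fir_original(float *a, const float *x, float k, int n) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] + k * x[i];
}

/* Refined version: compute into a scratch array b[], then
 * accumulate in a second loop (manual loop distribution).
 * Both loops now contain only dependence-free statements. */
void fir_refined(float *a, const float *x, float k, int n) {
    float b[64];                      /* scratch; assumes n <= 64 */
    for (int i = 0; i < n; i++)       /* loop 1: pure computation */
        b[i] = k * x[i];
    for (int i = 0; i < n; i++)       /* loop 2: accumulation */
        a[i] = a[i] + b[i];
}
```

Both versions compute identical results; only the dependence structure, and
thus the vectorizability, changes.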

\begin{table}
\centering
\caption{Execution Time of Refined FIR (Unit: s)}
\label{tab:FIRr_perf}
\begin{tabular}{|l|r|r|r|r|r|} \hline
Benchmark & RAW & NO-VEC & BB-VEC & SLP-VEC & LLVM-O2 \\
\hline FIR & 0.188 & 0.164 & 0.164 & 0.171 & 0.062 \\
\hline FIR(refine) & 0.326 & 0.100 & 0.080 & 0.091 & 0.117 \\
\hline 
\end{tabular}
\end{table}
\vspace{-5pt}
The second finding is that the computation kernels (loop bodies) of FIR, IIR,
and VMM are very small, so LLVM-O2 can generate a very tight loop body, whereas
NO-VEC, BB-VEC, and SLP-VEC perform loop unrolling, which enlarges it. After
inspecting the assembly code produced under the different configurations, we
found that register allocation for the tight LLVM-O2 loop body is much better
than for the other three, which is why LLVM-O2 runs much faster on these
benchmarks. Any vectorization algorithm should take this into account, since
both the SLP algorithm and the LLVM built-in vectorization algorithm run in the
architecture-independent optimization phase.





