%-----------------------------------------------------------------------------
%
%               Template for sigplanconf LaTeX Class
%
% Name:         sigplanconf-template.tex
%
% Purpose:      A template for sigplanconf.cls, which is a LaTeX 2e class
%               file for SIGPLAN conference proceedings.
%
% Guide:        Refer to "Author's Guide to the ACM SIGPLAN Class,"
%               sigplanconf-guide.pdf
%
% Author:       Paul C. Anagnostopoulos
%               Windfall Software
%               978 371-2316
%               paul@windfall.com
%
% Created:      15 February 2005
%
%-----------------------------------------------------------------------------


\documentclass[nocopyrightspace]{sigplanconf}

% The following \documentclass options may be useful:
%
% 10pt          To set in 10-point type instead of 9-point.
% 11pt          To set in 11-point type instead of 9-point.
% authoryear    To obtain author/year citation style instead of numeric.

\usepackage{amsmath}

\begin{document}


\title{Project Phase I: Redundant Array Bounds Check Removal}
\subtitle{CS 6241 : Advanced Compiler Optimizations}

\authorinfo{Cong Hou}
           {Georgia Tech}
           {GT-ID: 902532366}
\authorinfo{Xiao Yu}
           {Georgia Tech}
		   {GT-ID: 902641982}
\authorinfo{Xin Zhang}
           {Georgia Tech}
           {GT-ID: 902763408}
\maketitle



\section{Introduction}
In this project, we insert an array bounds check for each array reference in the code to make the code safer; to reduce the resulting performance degradation, we also implemented the redundant array bounds check removal algorithm of Bod\'{\i}k et al.~\cite{bodik}. Our approach can be divided into the following phases:
\begin{enumerate}
\item Identify array references and insert bounds checks.
\item Build the e-SSA form described in \cite{bodik}.
\item Build an inequality graph for each function and use it to detect and remove redundant array bounds checks.
\end{enumerate}

To evaluate the effectiveness of our approach, we tested it on the MediaBench suite under LLVM and compared our approach with global value numbering combined with partial redundancy elimination.

\section{Approach}
\subsection{Array Bounds Check Insertion}
Each array indexing operation is
represented by a $GEP$ instruction in the LLVM IR. For each such
instruction, we split the containing basic block into two blocks and insert a
$CmpInst$ and a $BranchInst$ at the end of the predecessor. We then set the two
targets of the $BranchInst$ to the successor and to a newly created
"Check\_Failed" basic block.
\subsection{e-SSA Construction}
We represent $\pi$ nodes with $\phi$ nodes. For each non-constant operand of
each $CmpInst$ and its corresponding $BranchInst$, we insert two $\pi$ nodes,
one in each successor block. Each newly created $\pi$ node is a new definition
of the original name, so we must propagate the definition to the rest of the
CFG. Assuming the original name is $x_0$ and the $\pi$ node defines a new name
$x_\pi$, the propagation considers the following cases:

\begin{itemize}

\item If the definition $x_\pi$ dominates a use of $x_0$, then this use should
be replaced with $x_\pi$.

\item If a use of $x_0$ is a $\phi$ node in a block on the dominance frontier
of $x_\pi$, and the incoming block corresponding to that use is dominated by
the definition of $x_\pi$, then the use should also be replaced.

\item If a block is on the dominance frontier of $x_\pi$ but is dominated by
the definition of $x_0$, we add a $\phi$ node at the beginning of that block.
This may insert unnecessary $\phi$ nodes, because there might be no use of
$x_0$ in the subsequent blocks; we leave the elimination of these unnecessary
insertions as future work.

\item If the dominance frontier of $x_\pi$ dominates the definition itself,
there is a back-edge from the frontier block to the $CmpInst$. In this case we
do not add new $\phi$ nodes, because such $\phi$ nodes cannot carry any
inequality relation information.

\end{itemize}
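As an illustration of the renaming (our notation, not actual LLVM IR), a branch on $x_0 < n$ splits the live range of $x_0$ so that each branch-specific fact is attached to its own name:

```text
        x0 = ...
        if (x0 < n) goto T else goto F
T:      x_pi1 = pi(x0)        ; x_pi1 < n holds here
        ...                   ; uses of x0 dominated by x_pi1 now use x_pi1
F:      x_pi2 = pi(x0)        ; x_pi2 >= n holds here
        ...                   ; uses of x0 dominated by x_pi2 now use x_pi2
```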
\subsection{Inequality Graph Construction}
After inserting the $\pi$ nodes and transforming the program into e-SSA form, we can convert the constraints over the program variables into a single, flow-insensitive constraint system. The system is represented as a directed, weighted inequality graph $G(V,E)$, defined as follows:
\begin{enumerate}
\item A vertex $v \in V$ is a variable or a numeric constant. We only work on statically allocated arrays, whose lengths are constant at compile time; as a consequence, we do not distinguish array lengths from other numeric constants, which differs slightly from the inequality graph described in \cite{bodik}. Also, if $v$ is a variable, it is one of two types, PhiNode or non-PhiNode; we need to distinguish these two to guarantee the soundness of our constraint solver.
\item An edge $e \in E$ represents an inequality relation between its source $S$ and destination $D$. If its weight is $c$, the relation the edge represents is $D - S \leq c$.
\end{enumerate}
In order to build the inequality graph, we need to handle the following types of instructions in the program:
\begin{enumerate}
\item Phi instructions (llvm::PHINode). We add two edges $l \xrightarrow{0}d$ and $r\xrightarrow{0}d$ for the instruction $d = \phi(l,r)$.
\item Add and subtract instructions (llvm::Instruction::Add, llvm::Instruction::Sub, llvm::Instruction::FAdd, llvm::Instruction::FSub). We add an edge $s \xrightarrow{c}d$ for the instruction $d = s + c$. Due to time limits, we did not handle floating-point operations; however, it would not be hard to extend our current solution to handle floating-point operations and values. One way is to cast floating-point numbers to integers, rounding toward the lower or upper bound according to the given operation.
\item Conditional branches (llvm::CmpInst and llvm::BranchInst). Since a variable in a CmpInst satisfies different constraints on different branches, we find the $\pi$ node of each variable in the succeeding basic blocks. Suppose the compare instruction is $x<y$ and both $x$ and $y$ are variables; we insert six edges: $x\xrightarrow{0}\pi_t(x)$, $x\xrightarrow{0}\pi_f(x)$, $y\xrightarrow{0}\pi_t(y)$, $y\xrightarrow{0}\pi_f(y)$, $\pi_t(y)\xrightarrow{-1}\pi_t(x)$, and $\pi_f(x)\xrightarrow{0}\pi_f(y)$. If $y$ is a constant, $\pi_t(y)$ and $\pi_f(y)$ are replaced by $y$.
\item We do not insert special vertices or edges for array allocations, since array lengths are treated the same as other numeric constants, and our constraint solver can handle any numeric constant regardless of whether it matches an existing vertex.
\item Array-bounds checks are implemented as ordinary compare and branch instructions, so no special handling is needed here.
\item We also do not handle constant assignments, since LLVM's SSA transformation automatically performs constant propagation and copy propagation.
\end{enumerate}
\subsection{Constraint Solving}
We implemented the algorithm described on page 9 of \cite{bodik} with a slight modification. We modified it because the algorithm does not specify how to establish inequality relations between constants. For example, suppose we have $x<5$ in the system and the current query is $x<10$. If there is no path from $x$ to $10$, we cannot prove this query using the algorithm described in \cite{bodik}. One way to solve this is to add edges between every pair of constants, which may lead to graph explosion. Our solution only slightly changes the existing algorithm: at the head of prove(vertex a, vertex v, int c), if $a$ and $v$ are both constant vertices, we simply return $v - a \leq c$. Thus, when we have $x<5$ in the system and ask about $x<10$, the prover eventually transforms the query into prove(5,10,-1), which our algorithm can handle even though there is no path between the constant vertices $5$ and $10$.
\section{Experiment}
\subsection{Experiment Setup}
The passes listed in Table~\ref{tab:passes} are constructed for this project. The passes should be run in
the order listed.
\begin{table}[!h]
\begin{centering}
\begin{tabular}{|p{2cm}|l|p{3.5cm}|}
\hline
Pass Name & Option Flag & Description \\
\hline
Array Bounds Check Insertion & -abc & Inserts an array bounds check for each statically allocated array access. \\
\hline
Promote Memory to Register & -mem2reg & Promotes memory variables to registers. \\
\hline
$\pi$ Node Insertion & -pinode & Inserts $\pi$ nodes and propagates the new definitions. \\
\hline
Array Bounds Check Elimination & -abcd & Eliminates redundant array bounds checks. \\
\hline
\end{tabular}
\end{centering}
\caption{Array Bound Check Elimination Passes}
\label{tab:passes}
\end{table}
To test our approach, we first run our implementation alone and then combine it with the global value numbering and partial redundancy elimination pass provided by LLVM (-gvn). We measure the number of eliminated array bounds checks, code size, run time, and compile time.
\subsection{Experiment Result and Discussion}
\begin{table*}[!h]
\begin{center}
\begin{tabular}{|c|c|p{3cm}|p{3cm}|p{3cm}|}
\hline
Benchmark & Input & Checks added by -abc & Eliminated (-abcd alone) & Eliminated (-gvn -abcd)\\
\hline
cjpeg.linked & testimg.ppm & 987 & 707 & 768\\
\hline
cjpeg.llvm & testimg.ppm & 490 & 304 & 391\\
\hline
encode.linked& clinton.pcm &78 &54 & 69\\
\hline
encode.llvm&clinton.pcm &83 &6 &80\\
\hline
mpeg2.linked & meil6v2.m2v & 467(431) & 341 &467\\
\hline
mpeg2.llvm & meil6v2.m2v & 396(360) & 287 &267\\
\hline
rawcaudio.linked & clinton.pcm & 6 & 0 & 3\\
\hline
rawcaudio.llvm & clinton.pcm &  3 & 0 & 0\\
\hline
rawdaudio.linked & clinton.pcm & 6 & 0 & 3\\
\hline
rawdaudio.llvm & clinton.pcm & 3 & 0 & 1\\
\hline
toast.linked & clinton.pcm & 525 & 509 & 510\\
\hline
toast.llvm & clinton.pcm & 350 &305 & 319\\
\hline
\end{tabular}
\end{center}
\caption{Redundant Array Bounds Checks Eliminated}
\label{tab:abc}
\end{table*}

\begin{table*}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Benchmark & Size & Size After -abc & Size After -gvn &Size After -gvn -abcd  \\
\hline
cjpeg.linked & 516K & 572K & 564K & 460K \\
\hline
cjpeg.llvm & 188K & 216K & 212K & 232K\\
\hline
encode.linked& 24K &28K &28K &24K \\
\hline
encode.llvm&16K &20K &20K &20K \\
\hline
mpeg2.linked & 120K & 148K & 144K &132K\\
\hline
mpeg2.llvm & 88K & 112K & 108K & 112K\\
\hline
rawcaudio.linked &8K&  8K &8K & 8K\\
\hline
rawcaudio.llvm & 4K & 4K & 4K & 4K\\
\hline
rawdaudio.linked &8K&  8K &8K & 8K\\
\hline
rawdaudio.llvm & 4K & 4K & 4K & 4K\\
\hline
toast.linked & 120K & 148K & 140K & 104K\\
\hline
toast.llvm & 72K & 92K &92K&88K\\
\hline
\end{tabular}
\end{center}
\caption{Bitcode Size Before and After Each Pass Combination}
\label{tab:size}
\end{table*}

\begin{table*}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Benchmark & Run Time (s) & Run Time After -abc & Run Time After -gvn & Run Time After -gvn -abcd \\
\hline
cjpeg.linked & 1.754 & 1.717 & 1.397 & 1.350 \\
\hline
cjpeg.llvm & 2.511 & 3.579 & 2.827 & 2.054 \\
\hline
encode.linked& 0.437 & 0.522 &0.332 &0.305 \\
\hline
encode.llvm & 0.276 & 0.434 & 0.297 & 0.277 \\
\hline
mpeg2.linked & 0.854 & 1.458 & 1.069 & 1.114 \\
\hline
mpeg2.llvm & 1.559 & 0.278,f & 0.122,f & 0.126,f \\
\hline
rawcaudio.linked &0.053&  0.045 &0.046 & 0.044\\
\hline
rawcaudio.llvm & 0.038 & 0.043 & 0.042 & 0.042\\
\hline
rawdaudio.linked &0.049&  0.054 &0.045 & 0.044\\
\hline
rawdaudio.llvm & 0.036 & 0.042 & 0.041 & 0.041\\
\hline
toast.linked & 0.843 & 1.621 & 0.828 & 0.786\\
\hline
toast.llvm & 1.066 & 2.94 & 1.355 & 1.176 \\
\hline
\end{tabular}
\end{center}
\caption{Run Time (s) of the Transformed Programs. An entry of the form $t$,f gives the run time when the program fails at a bounds check}
\label{tab:performance}
\end{table*}

\begin{table*}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Benchmark & -abc & -gvn & -mem2reg & -pinode& -abcd\\
\hline
cjpeg.linked & 0.328 & 0.9761 & 0.312 & 0.332 &0.256 \\
\hline
cjpeg.llvm & 0.128 & 0.336 & 0.112 & 0.164& 0.128\\
\hline
encode.linked& 0.02 &0.048 &0.016 &0.02 & 0.016\\
\hline
encode.llvm &0.012 &0.032 &0.012 &0.016&0.012\\
\hline
mpeg2.linked & 0.02 & 0.048 &0.016 &0.016&0.012\\
\hline
mpeg2.llvm & 0.012 & 0.028 & 0.012 & 0.016 & 0.012\\
\hline
rawcaudio.linked &0.004&  0.008 &0.004 & 0.004&0.004\\
\hline
rawcaudio.llvm & 0 & 0 & 0 & 0 & 0\\
\hline
rawdaudio.linked &0.004&  0.008 &0.004 & 0.004&0.004\\
\hline
rawdaudio.llvm & 0 & 0 & 0 & 0&0\\
\hline
toast.linked & 0.096 & 0.216 & 0.084 & 0.088 & 0.052\\
\hline
toast.llvm & 0.048 & 0.112 &0.044&0.056 &0.04\\
\hline
\end{tabular}
\end{center}
\caption{Compile Time(s) of Each Pass}
\label{tab:compile}
\end{table*}

From the results, we can see that the -gvn option alone cannot delete any redundant array bounds checks (so we do not list those numbers in the tables) and reduces the file size only by a limited amount. Our approach alone already removes many redundant checks, and combined with -gvn it removes even more. For most benchmarks, the bitcode size follows the same pattern
after each pass: it is larger after the array bounds check insertion pass "$-abc$"
than in the unoptimized code, shrinks after "$-gvn$", and shrinks
further after "$-abcd$". The run times of the benchmarks follow a similar
pattern: they increase after "$-abc$" and decrease
after "$-gvn$" and "$-abcd$". Among all the optimization passes, "$-gvn$" takes
the most time. The "$-abcd$" pass, which solves the inequality constraints
and removes redundant checks, surprisingly does not take much time. This is
probably because the inequality graph is relatively small and sparse.
\section{Conclusion}
In this project, we inserted array bounds checks for C code and successfully implemented the redundant array bounds check elimination algorithm described in \cite{bodik}. Moreover, we evaluated our approach on the MediaBench suite under LLVM, in combination with global value numbering and partial redundancy elimination.



% We recommend abbrvnat bibliography style.

\bibliographystyle{abbrvnat}

% The bibliography should be embedded for final submission.

\begin{thebibliography}{}
\softraggedright

\bibitem{bodik}
R. Bod\'{\i}k, R. Gupta, and M. L. Soffa. ABCD: Eliminating Array Bounds Checks on Demand. In PLDI, 2000.

\end{thebibliography}

\end{document}
