\documentclass{acmsig-alternate-10pt}

\input{source/includes}

\let\ctextfont=\tt
\lstset{basicstyle=\tt}

\newcommand{\LB}{\textrm{\L}}

\begin{document}

% --- Author Metadata here ---
\conferenceinfo{PACT}{'12 Minneapolis, MN, USA}
% --- End of Author Metadata ---

\title{Improved Loop Tiling based on the Analysis of Live Range Interference
and the Removal of Spurious False Dependences}
%\subtitle{}


\numberofauthors{4}

\author{}
%\author{Riyadh Baghdadi  
%   \and Albert Cohen
%   \and Sven Verdoolaege
%   \and Konrad Trifunovi\'c}

\maketitle

\begin{abstract}
\input{source/abstract}
\end{abstract}

\section{Introduction and Related Work}

% Multi-core processors are now in widespread use in almost all areas of
% computing: desktops, laptops and accelerators such as GPGPUs.
To harness the computing resources of multiple cores with complex
memory hierarchies, compilers need powerful optimizations, and
especially loop nest transformations.  Loop transformations operate on
the fine-grain schedule of the statement instances (iterations)
executed in a loop nest, and must preserve the data dependences among
these instances.  Two types of data dependences exist: true (a.k.a.\
data-flow) and false (a.k.a.\ memory-based) dependences
\cite{kennedy_optimizing_2002}.
False dependences are induced by the reuse of temporary variables
across statement instances.  They
eliminate degrees of freedom that may be essential to the expression
of effective loop nest transformations.

Scalar and array expansion techniques --- including renaming and
privatization --- remove false dependences at the expense of an
increased memory footprint
\cite{feautrier_array_1988,kennedy_optimizing_2002}.  Besides
enlarging the memory footprint, scalar and array expansion also
degrade temporal locality \cite{thies_unified_2001}. Scalar expansion
is particularly harmful, as it converts register operands into memory
operations \cite{Cal90}.

A family of array contraction techniques attempts to reduce the memory
footprint without constraining loop nest transformations
\cite{lefebvre_automatic_1998,Qui00}: the compiler performs a maximal
expansion, looks for transformations, and then attempts to contract
the arrays. By performing a maximal expansion, false dependences are
eliminated. Yet contraction is not always possible when unrestricted
loop transformations have been applied, as the set of simultaneously
live values may effectively require high-dimensional arrays to store
them.

Several alternative approaches have been proposed to constrain the
expansion a priori. In the presence of dynamic control flow, maximal
static expansion restricts the elimination of dependences to the
situations where the data flow can be captured accurately at
compilation time \cite{cohen_optimization_1998}; it can be generalized
to other memory expansion constraints and combined with array
contraction \cite{cohen_parallelization_1999}. A priori constraints on
memory footprint can also be enforced, up to linear volume
approximations \cite{thies_unified_2001}, and more generally,
trade-offs between parallelism and storage allocation can be
explored. These approaches are particularly interesting when adapting
loop nests for execution on hardware accelerators and embedded
processors with local memories.

In this paper we propose and evaluate a technique that allows
compilers to safely ignore false dependences and enable tiling.  The
technique is based on the concept of live range non-interference: it
decides which false dependences can be safely ignored in the context
of a given affine tiling transformation.  Unlike previous approaches,
it does not incur the cost of scalar or array expansion.

Section~\ref{sources_fdeps} surveys possible sources of false dependences
in a kernel.
Section~\ref{sec:motivation} shows that the classical tiling condition
may be too restrictive, forbidding safe tiling
opportunities. Sections~\ref{sec:interference}
and~\ref{sec:sufficient_cnd} introduce a new tiling test that avoids this
problem and allows the tiling of kernels with false dependences.
Sections~\ref{design_implem} and \ref{experiments}
present an implementation in
Pluto and experimental results on the Polybench suite.

\section{Sources of false dependences}
\label{sources_fdeps}
One common source of false dependences is the temporary variables
introduced by programmers in the body of a loop.  Even when the source
program initially contains no scalar variables, upstream compiler
passes may introduce them, and with them false dependences.  This is a
practical compiler construction issue, and a high priority for the
effectiveness of loop optimization frameworks implemented in
production compilers \cite{konrad_elimination_2011}.
Among upstream compiler passes generating false dependences, we will
concentrate on the most critical ones, as identified by ongoing
development on optimization frameworks such as the GRAPHITE
\cite{pop_graphite:_2006,trifunovic_graphite_2010} polyhedral
optimization framework in GCC:
%
\begin{itemize}
\item Transformation to Three-Address Code (3AC).\\GRAPHITE is an
  example of a loop optimization framework operating on low level
  three-address instructions. It is affected by the false dependences
  that result from a conversion to three-address code.
\item Partial Redundancy Elimination (PRE) \cite{Kno94} applied to
  array operations removes invariant loads and stores, promoting array
  accesses into scalars.  Figures~\ref{code:gemm}
  and~\ref{code:gemm-pre} show an example of this optimization.
\item Loop-invariant code motion is a common compiler optimization
  that moves loop-invariant code outside the loop body, eliminating
  redundant calculations
  (Figures~\ref{code:example-invariant-code-mot-before}
  and~\ref{code:example-invariant-code-mot-after}).  As a side effect,
  this optimization adds new scalars to the loop body, inhibiting
  important transformations such as loop tiling. In
  Figure~\ref{code:example-invariant-code-mot-after} the new scalar
  inhibits tiling of the outer two loops.
\end{itemize}

\begin{figure}[h!tb]
\begin{cprog}
for (i = 0; i < ni; i++)
  for (j = 0; j < nj; j++) {
S1:  C[i][j] = C[i][j] * beta;
      for (k = 0; k < nk; k++) {
S2:     C[i][j] += alpha * A[i][k] * B[k][j];
      }
  }
  \end{cprog}
  \caption{\label{code:gemm}\emph{Gemm} kernel}
\end{figure}

\begin{figure}[h!tb]
   \begin{cprog}
for (i = 0; i < ni; i++)
  for (j = 0; j < nj; j++) {
S1:  t = C[i][j] * beta;      
     for (k = 0; k < nk; k++) {
S2:    t += alpha * A[i][k] * B[k][j];
     }
S3:  C[i][j] = t;
    }
  \end{cprog}
  \caption{\label{code:gemm-pre}\emph{Gemm} kernel after applying PRE}
\end{figure}


\begin{figure}[h!tb]
  \begin{cprog}
for (i = 0; i < ni; i++)
  for (j = 0; j < nj; j++)
    for (k = 0; k < nk; k++) {
      A[i][j][k] = B[i][j] * C[i][j]
                           * D[i][j];  
    }
  \end{cprog}
  \caption{\label{code:example-invariant-code-mot-before}
  Original code before applying loop-invariant code motion}
\end{figure}

\begin{figure}
   \begin{cprog}
for (i = 0; i < ni; i++)
  for (j = 0; j < nj; j++) {
    t = B[i][j] * C[i][j] * D[i][j];
    for (k = 0; k < nk; k++) {
      A[i][j][k] = t;  
    }
  }
  \end{cprog}
  \caption{\label{code:example-invariant-code-mot-after}
  Code after applying loop-invariant code motion}
\end{figure}

In the case of GCC, performing loop nest optimizations directly on
three-address code brings many benefits.  It enables a tight
integration of loop optimization techniques with downstream
compilation passes, including automatic vectorization,
parallelization and memory optimizations.  The ideal optimization
framework operates on a representation low enough to benefit from all
the passes working on three-address code, while at the same time being
able to ignore the spurious false dependences generated by these
passes.

Loop tiling, and affine transformations enabling loop tiling, are of
the utmost importance in the adaptation of the grain of parallelism and
the exploitation of temporal locality
\cite{Iri88,bondhugula_practical_2008}. Unfortunately, these
polyhedral compilation techniques are also highly sensitive to the
presence of false dependences. Although some compilers can perform
PRE and loop-invariant code motion after tiling (GCC cannot), the
transformation to 3AC must still happen before any loop nest
optimization in order to benefit from all the passes operating on
three-address code; this, however, limits the opportunities to apply
tiling.

Our goal is to propose and evaluate a technique that lets compilers
escape this dilemma, providing more relaxed conditions under which
false dependences can be safely ignored and tiling enabled.

\section{Motivating Example}
\label{sec:motivation}

Tiling is possible when a loop band (a group of consecutive loop
levels) is fully permutable.  To apply tiling, the compiler needs to
identify a band in which all dependences are forward. This can be done
by calculating dependence directions for all dependences at each loop
level. A tilable loop band is composed of consecutive loop levels
carrying no negative dependence direction.  When a loop band is
identified, the dependences strongly satisfied by this band are
dropped before the construction of a new inner loop band starts (a
dependence is strongly satisfied if its sink is scheduled strictly
after its source at one of the loop levels of the band).  Loop levels
within an identified band are permutable: they can be freely
interchanged among themselves, which enables tiling.
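The band-detection step described above can be sketched as follows.
This is an illustrative fragment only, not Pluto's implementation; the
function name, the \texttt{MAX\_LEVELS} bound and the direction
encoding are assumptions made for the sketch.

```c
#include <stdbool.h>

#define MAX_LEVELS 8

/* dir[d][l] holds the direction of dependence d at loop level l:
   +1 forward, 0 level-independent, -1 backward.  A band of `len'
   consecutive levels starting at `first' is fully permutable (hence
   tilable) iff no dependence is backward at any level of the band. */
bool band_is_permutable(int ndeps, int dir[][MAX_LEVELS],
                        int first, int len)
{
    for (int d = 0; d < ndeps; d++)
        for (int l = first; l < first + len; l++)
            if (dir[d][l] < 0)
                return false;
    return true;
}
```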

Figure~\ref{fig:gemm_deps} shows data dependences on the scalar
\texttt{t} for the \emph{gemm} kernel presented in
Figure~\ref{code:gemm-pre}.  It
contains both true (read after write) dependences and false
(write after write and write after read) dependences.  
The figure shows a
simplified version of the dependences between statements in two
iterations \texttt{j} and \texttt{j+1}.  Many false dependences
stem from the fact that the same temporary scalar \texttt{t} is overwritten
in each iteration of the loop.  These dependences enforce a sequential
execution.

\begin{figure}[h!tb]
 \centering
 \includegraphics[bb=0 0 689 453,scale=0.30,keepaspectratio=true]{./figures/deps/gemm_deps.pdf}
 \caption{Dependences in \emph{gemm} kernel}
 \label{fig:gemm_deps}
\end{figure}

In this example, the classical tiling validity test prohibits the
application of tiling, although tiling is valid, as shown in
Figure~\ref{code:gemm-tiled}.

\begin{figure}[h!tb]
  {\footnotesize
  \begin{cprog}
#define T 32  /* tile size */
for (t1=0; t1<=floor((ni-1)/T); t1++)
  for (t2=0; t2<=floor((nj-1)/T); t2++)
    for (t3=T*t1; t3<=min(ni-1,T*t1+T-1); t3++)
      for (t4=T*t2; t4<=min(nj-1,T*t2+T-1); t4++)
      {
   S1:  t = C[t3][t4] * beta;
        for (t6=0; t6<nk; t6++)
   S2:    t = t + alpha * A[t3][t6] * B[t6][t4];
   S3:  C[t3][t4] = t;
      }
\end{cprog}}
\caption{\label{code:gemm-tiled}Tiled version of the \emph{gemm} kernel.}
\end{figure}

Our goal is to enable the compiler to perform tiling on codes that
contain false dependences, by ignoring false dependences when this is
possible, and not by eliminating them through renaming or
privatization.

In the next section, we introduce the concept of live range
non-interference.  We use this concept later to specify when it is
valid to ignore false dependences.

\section{Live Range Non-Interference}
\label{sec:interference}

We define \emph{live ranges} with respect to dynamic statement
instances. In a given sequential program, a value is said to be
\emph{alive} in the range of statement instances between its
definition instance and its last use instance.  The definition
instance is called the \emph{source} of the live range, marking its
beginning, and the last use is called the \emph{sink}, marking the end
of the live range.

Since the execution of a given program may give rise to an unbounded
set of live ranges, we restrict ourselves to loop nests with affine
bounds and conditional expressions (static control program parts) and
consider \emph{live range classes}, defined as affine mappings between
the sources and sinks of live ranges.

\subsection{Basic Notation}

$(S_{k}(I), S_{k'}(I'))$ defines a \emph{live
range class} beginning with an instance of the write statement $S_{k}$ and
ending with an instance of the read statement $S_{k'}$.  $I$ and
$I'$ are iteration vectors identifying the specific instances of
the two statements.

\begin{figure}[h!tb]  
  \begin{cprog}    
  for (i = 0; i < n; i++)
    for (j = 0; j < n; j++) {
 S1:  t = A[i][j] * y_1[j];
 S2:  x1[i] = x1[i] + t;
    }
\end{cprog}
\caption{One loop from the \emph{mvt} kernel}
\label{code:mvt}
\end{figure}

Here is an example of a live range class for the scalar \texttt{t} 
of the \emph{mvt} kernel shown in Figure~\ref{code:mvt}:
$$(S_{1}(i,j), S_{2}(i,j))\quad\text{s.t. $0\le i<n,~0\le j<n$}$$
%
For each iteration of $i$ and $j$, the above live range class
begins with the write statement $S_{1}$ and terminates with the read
statement $S_{2}$.  One instance of this live range class is the
following live range:
$$(S_{1}(0,0), S_{2}(0,0))
$$
It begins in the iteration $i=0$, $j=0$ and terminates in 
the same iteration.

\subsection{Live Range Non-Interference}

Any loop transformation that preserves data-flow dependences is
correct if, in addition, it does not lead to live range interference
\cite{vasilache_scalable_2007,trifunovic_graphite_2010,konrad_elimination_2011},
i.e., if no two live ranges overlap.
To guarantee the non-interference of two live ranges, we have to make
sure that the first live range is scheduled entirely before or
entirely after the second one.
%
Figure~\ref{fig:example_non_interference_interval_with_interval.2} shows
two examples where a pair of live ranges does not interfere.
Figure~\ref{fig:example_non_interference_interval_with_interval.1}
shows two examples where the same pair of live ranges does interfere.
%
Our new technique is based on the careful analysis of the conditions
for preserving the non-interference across affine loop transformations.

\begin{figure}[h!tb]
 \centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/example_non_interference_interval_with_interval_2.pdf}
 \caption{Examples where the two live range classes
 \mbox{$(S_{1}(I), S_{2}(I'))$} and 
 \mbox{$(S_{3}(J), S_{4}(J'))$} do
 not interfere (correct schedules)}
 \label{fig:example_non_interference_interval_with_interval.2}

 \medskip
 \centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/example_non_interference_interval_with_interval_1.pdf}
 \caption{Examples where the two live range classes
 \mbox{$(S_{1}(I), S_{2}(I'))$} and
 \mbox{$(S_{3}(J), S_{4}(J'))$}
 interfere (incorrect schedules)}
 \label{fig:example_non_interference_interval_with_interval.1}
\end{figure}

We can make several observations about live range non-interference.
%
\begin{itemize}
\item Any affine transformation --- including tiling --- is valid if
  it preserves data flow dependences and if it does not introduce live
  range interferences.
\item Tiling is composed of two basic transformations: loop strip-mining
  and loop interchange.
  \begin{itemize}
  \item Strip-mining is always valid because it
  does not change the execution order of statement instances
  and thus it can be applied unconditionally.
\item Loop interchange changes the order of iterations, and thus it
  may break live ranges.  But a closer look shows that it never
  changes the order of statement instances within one iteration of the
  loop body.
  \end{itemize}
\item If live range classes are private to one iteration of a given loop
  (i.e., live
  ranges begin and terminate in the same iteration of that loop), changing the
  order of iterations of the present loop or any outer loop
  preserves the non-interference of live ranges.
\item The only purpose of false dependences is to prevent live ranges
  from interfering.  If a live range class is private to one
  iteration, it is therefore safe to ignore false dependences on that
  live range class, as long as the statements in the class are subject
  to the same affine transformation. In particular, it is safe to
  permute and tile loops enclosing these statements.
\end{itemize}
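The observation that strip-mining never reorders statement instances
can be checked on a minimal sketch.  The trip count, strip size and
function name below are illustrative assumptions, not taken from the
paper.

```c
#define N 10   /* trip count (illustrative) */
#define TS 4   /* strip size (illustrative) */

/* Record the order in which a strip-mined loop visits i = 0..N-1.
   The visit order is exactly 0, 1, ..., N-1: strip-mining only
   restructures the loops, it does not reorder iterations, so it can
   never break a live range. */
int stripmined_order(int out[N])
{
    int k = 0;
    for (int t = 0; t < N; t += TS)               /* strip (tile) loop */
        for (int i = t; i < t + TS && i < N; i++) /* point loop */
            out[k++] = i;
    return k;
}
```

Only a subsequent interchange of the strip loops changes the iteration
order, which is why the iteration-privacy condition matters.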

\subsection{Illustrative Examples}

Let us illustrate the preceding observations on a few additional examples.

\subsubsection{The \emph{mvt} and \emph{gemm} kernels}
\label{sec:exmvt}

We consider again the \emph{mvt} kernel of Figure~\ref{code:mvt}.  The false
dependences created by the scalar \texttt{t} on the \texttt{j} loop
inhibit the compiler from applying tiling, although tiling for
\emph{mvt} is valid.

\begin{figure}[h!tb]
\centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/space_of_live_ranges_mvt.pdf}
 \caption{Live ranges for \textmd{\texttt{t}} in \emph{mvt}}
 \label{fig:live_ranges_mvt}
\end{figure}

Figure~\ref{fig:live_ranges_mvt} represents the live ranges of
\texttt{t} for the first iterations.  The figure shows the statements
executed in each iteration ($S_{1}$ and $S_{2}$) and the live ranges
generated by these statements.  The original execution order is as
follows: live range \mycirc{A} is executed first (this is the live
range between $S_{1}$ and $S_{2}$ in the iteration $i=0$, $j=0$), then
live range \mycirc{B} is executed (this is the live range for
iteration $i=0$, $j=1$), then \mycirc{C}, \mycirc{D}, \mycirc{E},
\mycirc{F}, \mycirc{G}, etc.

We notice that each live range is private to one iteration: it starts
and terminates in the same iteration, and no live range spans more
than one iteration.  The order of execution after applying a
$2\times2$ tiling is as follows: \mycirc{A}, \mycirc{E}, \mycirc{B},
\mycirc{F}, \mycirc{C}, \mycirc{G}, \mycirc{D}, \mycirc{H}.  Tiling
changes the order of execution of live ranges, but it does not break
any live range.
%
The false dependences generated by the array access \texttt{x1[i]} 
do not inhibit loop tiling as all of these false dependences
are forward on both levels.

Although tiling is valid in the case of \emph{mvt},
the compiler fails to apply tiling because the classical
tiling test is too restrictive on \texttt{t}.
%
The same reasoning applies to tiling of the \emph{gemm} kernel shown
in Figure~\ref{code:gemm-pre}: tiling is valid because the scalar
\texttt{t} is private to each \texttt{j} iteration.

\subsubsection{An example where tiling is not valid}
\label{sec:exsven}

Consider the example in Figure~\ref{code:exsven}.  Similarly to the
previous examples, the false dependences created by the scalar
\texttt{t} on the $j$ loop prevent the compiler from applying loop
tiling. But is it valid to ignore false dependences in this case?

Figure~\ref{fig:live_ranges_sven} shows the live ranges of
\texttt{t}. The original execution order is: \mycirc{A}, \mycirc{B},
\mycirc{C}, \mycirc{D}, \mycirc{E}, \mycirc{F}, \mycirc{G},
\mycirc{H}. One live range begins at \mycirc{A} (i.e., in the
iteration $i=0$, $j=0$) and the value remains live until \mycirc{D}
(i.e., the iteration $i=0$, $j=3$).  After tiling, the new execution
order is: \mycirc{A}, \mycirc{E}, etc. This means that the value
written in \mycirc{A} is overwritten by the write statement in
\mycirc{E}, which breaks the live range.

Tiling in this case is not valid because the live ranges are not
private to the $j$ iteration; we should not ignore false dependences
on \texttt{t}.

\begin{figure}[h!tb]
  \begin{cprog}    
for (i = 0; i <= 1; i++)
  for (j = 0; j <= 3; j++) {
    if (j==0)
 S1:  t = 0;
    if (j>0 && j<=2)
 S2:  t = t + 1;
    if (j==3)
 S3:  A[i] = t;
  }
\end{cprog}
\caption{Example where tiling is not possible}
\label{code:exsven}

\medskip
\centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/space_of_live_ranges_sven.pdf}
 \caption{Live ranges for the example in Figure~\ref{code:exsven}}
 \label{fig:live_ranges_sven}
\end{figure}

\subsection{Construction of Live Range Classes}
\label{sec:live_range_interval_extraction}

We use read-after-write dependences to construct live range classes.
For this, we use the array data-flow analysis described
in~\cite{feautrier_dataflow_1991} and implemented in the ISL
library~\cite{verdoolaege_isl:_2010} using parametric integer linear
programming.  Array data-flow analysis answers the following question:
given a value $v$ that is read from a memory cell $m$ at a statement
instance $r$, compute the statement instance $w$ that produced the
value $v$.  The result is a (possibly non-convex) affine relation between
write and read statement instances, described by a union of
polyhedra.

This systematic construction needs to be completed with the special
treatment of live-in and live-out array elements. Array data-flow
analysis provides the necessary information for live-in ranges, but
inter-procedural analysis or array regions accessed outside the scope
of the static control part is needed for live-out properties. This is
an orthogonal problem, and for the moment we conservatively assume
that the program is manually annotated with live-out arrays (the
analysis for scalars is not a problem).

\section{Ignoring false dependences}
\label{sec:sufficient_cnd}

We say that a live range class is \emph{iteration-private at level
  $k$} if it begins and ends in the same iteration of a loop at
nesting level $k$.
%
To determine whether a live range class is iteration-private at a given
loop level, we calculate the distance between the source and the
sink of the live range class at that loop level.
If this distance is zero, the live range class is
iteration-private at that level.
Let $\delta_{S_1 \rightarrow S_2}$ be an affine mapping characterizing a
class of dependences between instances of statements $S_{1}$ and
$S_{2}$. We say that $\delta_{S_1 \rightarrow S_2}$ is \emph{adjacent}
to a live range class $R$ if (instances of) $S_{1}$ or (instances of)
$S_{2}$ are the sink or the source of the live range class.  A false
dependence may be adjacent to a single live range class, in which case
the source and sink of the dependence belong to different live ranges
in the class. A false dependence may also be adjacent to two different
live range classes.

In Figure~\ref{fig:dep_belong},
the dependence $\delta_{S_1 \rightarrow S_1}$
is adjacent to the live range class \mbox{$(S_{1}(I), S_{2}(I'))$}.
The dependence $\delta_{S_2 \rightarrow S_3}$ is adjacent to
\mbox{$(S_{1}(I), S_{2}(I'))$} and
\mbox{$(S_{3}(I), S_{4}(I'))$}.
The dependence $\delta_{S_3 \rightarrow S_3}$ is adjacent to
\mbox{$(S_{3}(I), S_{4}(I'))$}
but is not adjacent to \mbox{$(S_{1}(I), S_{2}(I'))$}.

\begin{figure}[h!tb]
\centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/belonging_deps.pdf}
 \caption{An example of false dependences adjacent
 to \mbox{$(S_{1}(I), S_{2}(I'))$}
 and $(S_{3}(J), S_{4}(J'))$}
 \label{fig:dep_belong}
\end{figure}

\subsection{Tiling}

To identify which false dependences should be ignored during tiling
and which ones should not be ignored, we use the following procedure.
%
\begin{itemize}
\item For each live range class, we check whether it is
  iteration-private at each level of the band.
\item For each false dependence, we consider all adjacent live range
  classes.  If all of them have been determined to be
  iteration-private at each level of the band, the false dependence
  can be ignored.
\end{itemize}
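This procedure amounts to a conjunction over the adjacent classes.
The sketch below is illustrative only; the data layout and the
function name are assumptions, not Pluto code.

```c
#include <stdbool.h>

/* class_private[c] is true iff live range class c was found
   iteration-private at every level of the candidate band.
   adjacent[] lists the classes adjacent to one false dependence.
   The dependence may be ignored only if every adjacent class is
   iteration-private at each level of the band. */
bool can_ignore_false_dep(const bool class_private[],
                          const int adjacent[], int nadjacent)
{
    for (int a = 0; a < nadjacent; a++)
        if (!class_private[adjacent[a]])
            return false;
    return true;
}
```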

Figure~\ref{code:ex_rm_fdep_between_2_live_ranges}
shows why it is not safe to ignore false dependences
that are adjacent to a non-iteration-private live range class.
The kernel has two live range classes
on the scalar \texttt{x}: the first live range class
\mbox{$(S_{1}(I), S_{2}(I))$} is iteration-private and the second
\mbox{$(S_{3}(I), S_{4}(I'))$} is not. A $2\times2$
tiling in this case is not correct because it will break the
\mbox{$(S_{3}(I), S_{4}(I'))$} live range class, hence
the false dependence $\delta_{S_4 \rightarrow S_1}$ should not be
removed.

\begin{figure}[h!tb]
  \begin{cprog}    
for (i = 0; i <= ni; i++)
  for (j = 0; j <= nj; j++)
    if (j % 4 == 0) {
 S1:  x = 0;
 S2:  A[i][j] = x;
 S3:  x = 1;
    } else
 S4:  x++;

\end{cprog}
\caption{\label{code:ex_rm_fdep_between_2_live_ranges} Removing false dependences between two live range classes}
\end{figure}

\subsection{Parallelism}

Besides tiling, we are also interested in the extraction of data
parallelism. Thread-private variables inducing loop-carried false
dependences have to be privatized whenever these loops are being data
parallelized.

For example, false dependences on scalar variables whose live ranges
begin and end within a single iteration of a parallel loop may be
safely ignored when detecting parallelism, but these variables must be
marked as thread-private in OpenMP.  The same reasoning applies to
array variables, but OpenMP does not support the declaration of
thread-private array elements, so we currently only ignore false
dependences involving scalar variables when expressing parallelism.
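For the \emph{mvt} loop of Figure~\ref{code:mvt}, the resulting OpenMP
code would mark \texttt{t} (and the inner index) thread-private with a
\texttt{private} clause.  The sketch below is illustrative, not the
paper's generated output, and it degrades gracefully to sequential
code when OpenMP is disabled.

```c
/* Sketch: outer loop of the mvt kernel parallelized with OpenMP.
   The scalar t carries only iteration-private live ranges, so its
   false dependences are ignored for parallelism detection and t is
   simply declared private to each thread. */
void mvt_x1(int n, double A[n][n], double y_1[n], double x1[n])
{
    int i, j;
    double t;
    #pragma omp parallel for private(j, t)
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            t = A[i][j] * y_1[j];
            x1[i] = x1[i] + t;
        }
}
```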

The need for thread-privatization is much less intrusive than actual
scalar or array expansion, which involves the generation of new array
declarations, the rewriting of array subscripts, and the recovery of the
data flow across renamed privatized structures. In addition, making a
scalar variable thread-private does \emph{not} result in the
conversion of register operands into memory accesses, unlike the
privatization of a scalar variable along an enclosing loop.

In practice, support for thread-privatization is implemented as a slight
extension to the state-of-the-art violated dependence analysis
\cite{vasilache_violated_2006,konrad_elimination_2011}.
%
Interestingly, the transformation of source code to three-address code
does not impact parallelism detection, as it only induces intra-block
live ranges.  The next section proves a similar result about the
impact of our technique on the applicability of loop tiling.

\section{On the Tiling Potential of 3AC}
\label{proof_of_equivalence}

This section shows that the transformation of source code to
three-address code does not harm the applicability of loop tiling.
This was a strong intuition behind the design and implementation of
the first low-level polyhedral optimization framework, GRAPHITE, in
2006 \cite{pop_graphite:_2006}.

In other words, we show that when false dependences are ignored with
our technique, tiling yields the same result whether it is applied
before or after three-address lowering; this is not the case under the
classical correctness conditions for tiling.

Let us consider a simple statement $S_{1}$:
\\
\begin{cprog}
  for (i1=...; i1<n1; i1++) {
    for (i2=...; i2<n2; i2++) {   
      ...
      for (im=...; im<nm; im++) {
        ... 
   S1:  x = expr_1 + expr_2 + expr_3;
        ...
      }
    }
  }
\end{cprog}

Let $\Delta$ be the set of all dependences induced by $S_{1}$.  After
transforming this statement to 3AC, we get two new statements: $S_{2}$
and $S_{3}$.
\\
\begin{cprog}
  for (i1=...; i1<n1; i1++) {
    for (i2=...; i2<n2; i2++) {   
      ...
      for (im=...; im<nm; im++) {
        ... 
   S2:  x1 = expr_2 + expr_3;
   S3:  x = expr_1 + x1;
        ...
      }
    }
  }
\end{cprog}

Variable \texttt{x1}, introduced by the three-address transformation,
is a new variable: it has no dependence with the other variables of
the program (those that existed before the transformation), but it
induces three new dependences between the two newly introduced
statements:
\begin{itemize}
\item A flow dependence: $\delta_{S_2 \rightarrow S_3}$.
\item A write-after-write dependence: $\delta_{S_2 \rightarrow S_2}$.
\item A write-after-read dependence: $\delta_{S_3 \rightarrow S_2}$.
\end{itemize}

Before three-address transformation the set of dependences is
$\Delta$.  After three-address transformation the set of dependences
is $\Delta \cup \{ \delta_{S_2 \rightarrow S_3}, \delta_{S_2
  \rightarrow S_2},\delta_{S_3 \rightarrow S_2} \}$.

The flow dependence $\delta_{S_2 \rightarrow S_3}$ constitutes a live
range that is iteration-private (it begins and terminates in the same
iteration); by applying the technique proposed in this paper, we can
therefore ignore the two false dependences $\delta_{S_2 \rightarrow S_2}$
and $\delta_{S_3 \rightarrow S_2}$ during tiling. The dependences that
remain are $\Delta \cup \{ \delta_{S_2 \rightarrow S_3} \}$.  Since
the flow dependence $\delta_{S_2 \rightarrow S_3}$ is
iteration-private (its dependence distance is zero), it has no effect
on tiling; applying tiling to the transformed code is therefore
exactly equivalent to applying it to the original code, since in both
cases tiling is constrained by the same set $\Delta$ of dependences.

If the right-hand side of a statement contains more operations (i.e.,
after transformation to 3AC, the statement is split into three or more
statements), the same proof applies by induction on the newly
introduced statements.

%------------------------------------------------------------------------------%
\section{Design and Implementation}
\label{design_implem}

Our technique has been implemented in the Pluto source-to-source
compiler \cite{bondhugula_practical_2008}.  Although Pluto is a
polyhedral compiler, the technique can be used in any other compiler
that implements some form of array data-flow analysis.
%
The polyhedral model is an algebraic representation and abstraction of
programs for reasoning about loop transformations. It makes it
possible to model and apply complex loop nest transformations,
addressing most parallelism- and locality-enhancing challenges.

Figure~\ref{fig:pluto_steps} details the implementation of the
proposed technique (steps~3 and~4 are the new steps introduced to
the compiler).  After calculating dependences, Pluto applies
the Forward Communication Only (FCO) algorithm
\cite{Gri04b,bondhugula_practical_2008} to build a schedule that
maximizes data locality and outermost loop parallelism (step~2 in the
figure).  This algorithm finds and applies the most appropriate
combination of loop nest transformations, combining loop distribution,
fusion, skewing, shifting, interchange, etc.  FCO also identifies
tilable bands, but does not perform tiling itself.  Tiling
(strip-mining and interchange) is performed in a follow-up pass.

\begin{figure}[h!tb]
\centering
 \includegraphics[scale=0.35,keepaspectratio=true]
  {./figures/pluto-modified.pdf}
 \caption{Implementation of the proposed technique
  in Pluto}
 \label{fig:pluto_steps}
\end{figure}

We did not modify the FCO algorithm. It still takes into account all
false dependences, attempting to convert them into forward dependences
through an affine transformation.  Since some false dependences may
fail to be converted into forward dependences, we use our technique
as a post-pass to recognize outermost permutable bands larger than those
recognized by the standard FCO algorithm. It works as follows:
\begin{itemize}
 \item Step 3 marks false dependences that should be ignored.
 \item Step 4 calculates dependence directions for all loop levels in order to
   identify tilable bands. If a false dependence is ignored at a
   given loop level, its direction for that level is set to zero.  A
   tilable loop band is composed of consecutive loop levels carrying
   no negative dependence direction.
\item Step 5 applies tiling on permutable loop bands identified by step 4.
\end{itemize}

Although we implemented our technique in Pluto, the method relies on
relaxing the correctness conditions for tiling, and any compiler that
implements some form of array data-flow analysis can benefit from it.

\section{Experimental Results}
\label{experiments}

The experiments were performed on a dual-socket AMD Opteron (Magny-Cours) blade
with $2\times12$ cores at 1.7\,GHz and 16\,GB of RAM.
%
The baseline is compiled with GCC 4.4, with optimization flags
\texttt{-O3 -ffast-math}.\footnote{This version of GCC performs no
  further loop nest optimization on these benchmarks, but it succeeds
  in automatically vectorizing the generated code in many cases.}  We
use OpenMP as the target of automatic transformations. We compare the
original Pluto implementation with our modified version, reporting the
median of the speedups obtained after 30 runs of each benchmark.

Our technique was tested on the Polybench suite version 2.0 with large
datasets.\footnote{\scriptsize\url{http://www.cse.ohio-state.edu/~pouchet/software/polybench}}
To stress the method on realistic false dependences, scalar variables
were intentionally introduced into each of the benchmark kernels.
These scalar variables were introduced either by transforming the
source program into Three-Address Code (3AC) or by applying Partial
Redundancy Elimination (PRE) to increase the number of false
dependences.  Both transformations (3AC and PRE) have been
manually applied to Polybench. Note that Pluto is a source-to-source
compiler: it does not convert the code to three-address form
internally, nor does it privatize or rename variables to eliminate
false dependences.

Our modified implementation handles the complete Polybench suite and shows
excellent results, whereas the original Pluto systematically fails to tile
the modified benchmarks with extra scalar
variables. Figures~\ref{fig:performance_numbers_3AC}
and~\ref{fig:performance_numbers_PRE} compare the effect of optimizing
the code with and without the
\lstinline{--ignore-false-dependences} option.  PRE is only effective for some
loop nests, which explains why
Figure~\ref{fig:performance_numbers_PRE} does not show all Polybench
kernels.

\begin{figure*}[h!tb]
  \hbox{\hskip-4mm\includegraphics[scale=0.47]{../test-pdf/ignore-false-deps-for-tiling-tac.pdf}\hskip-4mm}
  \caption{Speedup against non-tiled sequential code for PolyBench --- 3AC}
  \label{fig:performance_numbers_3AC}
\end{figure*}

Three classes of kernels can be identified in
Figure~\ref{fig:performance_numbers_3AC}:
%
\begin{enumerate}
\item The first and largest class is best represented by
  benchmarks \emph{gemm}, \emph{gesummv}, \emph{syrk}, \emph{syr2k},
  and \emph{covariance}. The \lstinline{--ignore-false-dependences}
  option, applied to 3AC, restores the full tiling potential of Pluto
  and speedup reaches $96\times$ on \emph{gemm}.  This class shows
  that, with the proposed technique, tiling code in three-address form
  is as effective as tiling the original code, which is not in
  three-address form.
\item The second class is represented by \emph{2mm},
  \emph{3mm}, \emph{gemver}, \emph{mvt}, \emph{lu}, \emph{seidel},
  \emph{jacobi-1d}, \emph{jacobi-2d}, \emph{atax}, \emph{bicg}, and
  \emph{trisolv}. It shows a difference in performance between codes in
  three-address form and codes not in three-address form.  This
  difference is not due to tiling but to a different selection of
  enabling loop transformations in the FCO algorithm.  The original
  code (not in three-address form) has very few false dependences
  which gives much freedom for the FCO algorithm to perform profitable
  affine transformations. In \emph{gemver} for example, the FCO
  algorithm keeps loops fused together when the code is not in
  three-address form, whereas it performs loop distribution (due to
  false dependences) for \emph{gemver} in three-address form.  Loop
  distribution happens to negatively impact performance due to missed
  temporal reuse and because parallelism is confined to inner loops.
  Let us examine another example in detail: \emph{jacobi-2d}.  To
  extract outermost parallelism in \emph{jacobi-2d}, the FCO algorithm
  has to perform loop skewing.  Due to false dependences in the
  three-address code, FCO cannot apply skewing, and thus only
  innermost loops can be parallelized, leading to a loss in
  performance.  This class of
  benchmarks motivates future work towards the integration of
  non-interference constraints into the FCO algorithm itself.
\item In the third class, represented by \emph{symm}, \emph{fdtd-2d},
  and \emph{cholesky},
  the original code itself contains scalar variables, and thus
  Pluto fails to find tilable loop bands.  Ignoring false
  dependences enables Pluto to find tilable bands, resulting
  in performance improvements.
\end{enumerate}

Similarly, Figure~\ref{fig:performance_numbers_PRE} shows that the use of
the option \lstinline{--ignore-false-dependences} restores Pluto's ability
to identify tilable bands on most benchmarks despite the application
of PRE.  In most of these benchmarks, the original loop depth is
greater than 2, and after applying PRE on statements in the innermost
loop, the outermost two loops remain free of false dependences.  This
is not the case for \emph{gemver} and \emph{gesummv}, however, where
the original loop nest depth is exactly 2 and where the PRE
optimization introduces a temporary scalar variable to hold invariant
data across
the iterations of the inner loop; this variable induces false
dependences preventing Pluto from applying tiling. Note that PRE also
makes the loops imperfectly nested in the latter case, but this does
not in itself impact the ability of Pluto to tile the loops.

For \emph{seidel}, \emph{lu}, \emph{atax}, \emph{bicg},
\emph{cholesky}, \emph{jacobi-1d}, \emph{jacobi-2d}, and
\emph{trisolv}, although tiling was applied, the parallelization
overhead is very high.  This is due to a lack of adequate
profitability models in the current implementation of Pluto, which is
orthogonal to the contribution of this paper.

\begin{figure}[h!tb]
  \hbox{\hskip-3mm\includegraphics[scale=0.55]{../test-pdf/ignore-false-deps-for-tiling-pre.pdf}\hskip-3mm}
  \caption{Speedup against non-tiled sequential code for PolyBench --- PRE}
  \label{fig:performance_numbers_PRE}
\end{figure}

\section{Conclusion and Future Work}

Loop tiling is one of the most profitable loop nest transformations.
Due to its wide applicability, any enhancement on tiling will impact a
wide range of programs.  The proposed technique lets the compiler
ignore false dependences between iteration-private live ranges,
enabling it to discover larger bands of tilable loops.

Using our technique, loop tiling can be applied without any expansion
or privatization, apart from the necessary marking of thread-private
data for parallelization. The footprint on memory is minimized, and
the overheads of array privatization are avoided.

We have shown that ignoring false dependences for iteration-private
live ranges is particularly effective to enable tiling on
three-address code, or after applying scalar optimizations such as
partial redundancy elimination and loop invariant code motion.

We are investigating how to integrate non-interference constraints
into the FCO algorithm itself, to avoid the performance degradation
observed when converting a few benchmarks to three-address code.  We
are also interested in combining this technique with on-demand array
expansion, to enable the maximal extraction of tilable parallel loops
with a minimum of memory footprint.

%%----------------------------------------------------------------------%%
%END of Doc

\bibliographystyle{abbrv}
\bibliography{bibliography}




\end{document}
