% llncs : version 2.4 for LaTeX2e as of 16. April 2010
\documentclass{llncs}
%
%\usepackage{makeidx}  % allows for indexgeneration
\input{source/includes}

\let\ctextfont=\tt

\newcommand{\LB}{\textrm{\L}}

%
\begin{document}
%
\frontmatter          % for the preliminaries
\pagestyle{headings}  % switches on printing of running heads
\mainmatter              % start of the contributions
%
\title{Ignoring False Dependences To Enable Tiling}
%
%
\author{Riyadh Baghdadi  
   \and Albert Cohen
   \and Sven Verdoolaege
   \and Konrad Trifunovi\'c}

\institute{INRIA and \'Ecole Normale Sup\'erieure \\
\email{\textit{first}.\textit{last}@inria.fr}}

\maketitle{}

\begin{abstract}
\input{source/abstract}
\end{abstract}

\section{Introduction and Related Work}

Multi-core processors are now in widespread use in almost all areas of
computing: desktops, laptops and accelerators such as GPGPUs.  To
harness the power of multiple cores and complex memory hierarchies,
powerful compiler optimizations, and especially loop nest
transformations, are in high demand.

To preserve the semantics of the original program, loop
transformations operate on the fine-grain schedule of the statement
instances (iterations) executed in a loop nest. Data-dependences among
these statement instances need to be preserved.  Two types of data
dependences exist, true (a.k.a.\ data-flow) and false (a.k.a.\
memory-based) dependences \cite{kennedy_optimizing_2002}.

False dependences are induced by the repeated reuse of temporary
variables.  These false dependences not only increase the total
number of dependences, increasing the complexity of the optimization
problem, but, most importantly, they reduce the degrees of freedom
available to express effective loop nest transformations.

Scalar and array expansion techniques have been proposed to deal with
false dependences 
\cite{feautrier_array_1988}. Renaming and privatization are the two
main classes of expansion techniques to remove false
dependences \cite{kennedy_optimizing_2002}.  The main limitation
for expansion comes from the negative impact on cache
locality and memory footprint \cite{thies_unified_2001}. Scalar
expansion is particularly harmful as it converts register arguments
into memory operations.

Array contraction methods have been proposed
\cite{lefebvre_automatic_1998,Qui00} to reduce the memory footprint
without constraining loop nest transformations: the compiler performs
a maximal expansion, looks for transformations, and then attempts to
contract the arrays. Maximal expansion eliminates all false
dependences; the problem is that contraction is not
always possible once unrestricted loop transformations have been
applied.

Several alternative approaches have been proposed to constrain the
expansion a priori. In the presence of dynamic control flow, maximal
static expansion restricts the elimination of dependences to the
situations where the data flow can be captured accurately at
compilation time \cite{cohen_optimization_1998}; it can also be
combined with array contraction \cite{cohen_parallelization_1999}. A
priori constraints on memory footprint can also be enforced, up to
linear volume approximations \cite{thies_unified_2001}, and more
generally, trade-offs between parallelism and storage allocation can be
explored. These approaches are particularly interesting when adapting
loop nests for execution on hardware accelerators and embedded
processors with local memories.

Even in source programs that do not initially contain false
dependences (dynamic single assignment programs), compiler passes may
introduce false dependences in upstream transformations. This
is a practical compiler construction issue, and a high priority for
the effectiveness of loop optimization frameworks implemented in 
production compilers \cite{konrad_elimination_2011}.  Among upstream compiler
passes generating false dependences, we will concentrate on the
most critical ones, as identified by ongoing development on the
GRAPHITE polyhedral optimization framework in 
GCC \cite{konrad_elimination_2011}:
\begin{itemize}
\item Transformation to Three-Address Code (3AC). GRAPHITE
  is an example of a polyhedral loop optimization framework that operates
  on low-level three-address instructions, and that is therefore affected
  by the false dependences introduced by the transformation to
  three-address code.
\item Partial Redundancy Elimination (PRE) \cite{Kno94} applied to
  array operations removes invariant loads and stores, promoting array
  accesses into scalars.  Figures~\ref{code:gemm}
  and~\ref{code:gemm-pre} show an example of this optimization.
\end{itemize}
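To make the first item concrete, a three-address lowering of a simple
accumulation can be sketched as follows. This is only an illustrative
sketch (the kernel and the temporaries \texttt{t1} and \texttt{t2} are
hypothetical, not actual GRAPHITE output); the point is that reusing the
same temporaries in every iteration introduces write-after-write and
write-after-read dependences even though the data flow is iteration-local:

```c
#include <assert.h>

/* Hypothetical kernel: C[i] += alpha * A[i] * B[i], lowered by hand
 * to three-address form.  The temporaries t1 and t2 are overwritten
 * in every iteration, so each iteration carries false (memory-based)
 * dependences on them, even though the data flow is iteration-local. */
void kernel_3ac(int n, double alpha,
                const double *A, const double *B, double *C)
{
    double t1, t2;
    for (int i = 0; i < n; i++) {
        t1 = alpha * A[i];   /* write t1: a live range begins     */
        t2 = t1 * B[i];      /* read t1 (its range ends), write t2 */
        C[i] = C[i] + t2;    /* read t2: its live range ends here */
    }
}
```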

\begin{figure}[h!tb]
\begin{minipage}[b]{0.5\textwidth}
  \begin{cprog}
for (i = 0; i < ni; i++)
  for (j = 0; j < nj; j++) {
    s1: C[i][j] = C[i][j] * beta;
        for (k = 0; k < nk; ++k)
    s2:   C[i][j] += alpha *
                     A[i][k] * B[k][j];
  }
  \end{cprog}
  \caption{\label{code:gemm}\emph{Gemm} kernel}
\end{minipage}
\begin{minipage}[b]{0.5\textwidth}
   \begin{cprog}
for (i = 0; i < ni; i++)
  for (j = 0; j < nj; j++) {
    s1:  t = C[i][j] * beta;
         for (k = 0; k < nk; ++k)
    s2:    t += alpha *
                A[i][k] * B[k][j];
    s3:  C[i][j] = t;
  }
  \end{cprog}
  \caption{\label{code:gemm-pre}\emph{Gemm} kernel after applying PRE}
\end{minipage}
\end{figure}

Our goal is to propose and evaluate a technique that allows a compiler
to ignore false dependences so that tiling is enabled.
Since ignoring false dependences to enable tiling is not
always correct, we use a sufficient condition, based on the concept
of live range non-interference, to
guarantee the correctness of tiling.  The technique is implemented
in the Pluto source-to-source compiler
\cite{bondhugula_practical_2008}.
In the following sections, we recall the classical tiling legality test
(the test that allows a compiler to decide whether it is correct
to apply tiling), show cases where this test is too restrictive and
inhibits legal tiling, and then propose a new tiling test that avoids
this problem and allows the tiling of kernels with false dependences.

\section{Motivating Example}

\begin{figure}[h!tb]
 \centering
 \includegraphics[bb=0 0 689 453,scale=0.30,keepaspectratio=true]{./figures/deps/gemm_deps.pdf}
 \caption{Dependences in \emph{Gemm} kernel}
 \label{fig:gemm_deps}
\end{figure}

Figure~\ref{fig:gemm_deps} shows the data dependences on the scalar
\texttt{t} in the \emph{Gemm} kernel (Figure~\ref{code:gemm-pre}).  It
contains both data-flow (true) dependences and false
(write-after-write and write-after-read) dependences.
The figure shows a
simplified version of the dependences between statements in two
iterations \texttt{j} and \texttt{j+1}.  Many false dependences
stem from the fact that the same temporary scalar is overwritten
in each iteration of the loop.  These dependences enforce a sequential
execution.  Applying loop blocking (tiling) would be an excellent
optimization to enhance data locality in the case of \emph{Gemm}. To
apply this optimization a compiler needs to interchange the iterators
\texttt{i} and \texttt{j}.

To apply tiling, the compiler needs to check that the dependences
on the outermost loop levels are forward. In the case of \emph{Gemm},
due to the false dependences on the scalar \texttt{t}, the classical
tiling legality test does not allow tiling to be applied, although tiling
is legal, as we show in Figure~\ref{code:gemm-tiled}, because
the scalar \texttt{t}
is written and consumed in the same iteration: the live range of \texttt{t}
is private to the \texttt{j} iteration.

\begin{figure}[h!tb]
  \begin{cprog}    
for (t1=0; t1 <= (ni-1)/32; t1++) {
  for (t2=0; t2 <= (nj-1)/32;t2++) {
    for (t3=32*t1; t3 <= min(ni-1,32*t1+31);t3++) {
      for (t4=32*t2; t4 <= min(nj-1,32*t2+31);t4++) {
       s1:  t = C[t3][t4] * beta;
            for (t6=0; t6 < nk;t6++) {
       s2:    t=t+alpha*A[t3][t6]*B[t6][t4];
            }
       s3:  C[t3][t4]=t;
       }
     }
  }
}
\end{cprog}
\caption{\label{code:gemm-tiled}Tiled version of the \emph{Gemm} kernel (simplified version)}
\end{figure}

This pattern is very common, especially when tiling is applied to
low-level three-address code (as is the case, for example, in \emph{GRAPHITE}
\cite{pop_graphite:_2006,trifunovic_graphite_2010}, a polyhedral optimization
pass in GCC). Our goal is to relax this limitation and enable the compiler
to perform tiling on such codes regardless of the presence of false dependences.
This is done by ignoring false dependences; the goal of the paper is to show
when it is safe to do so. No privatization of scalars, no expansion of
arrays into higher-dimensional arrays, and no array renaming are needed.
In the next section, we introduce the concept of live range non-interference.
We use this concept later to specify when it is legal to ignore false
dependences.

\section{Live Range Non-Interference}
In a program, a memory location may be read and written by many
statements. A value written to a memory location is said to be live
until it is destroyed by a subsequent write to the same memory
location.

The statement that wrote the value marks the beginning of a \emph{live
  range interval}; any read of that memory location marks the end of
the \emph{live range}.  A write statement may be part of
multiple live ranges if its value is read by multiple statements.

\subsection{Basic Notation}
\begin{itemize}
  \item $S_{k}$ denotes a statement.
  \item $(<S_{k}, \vec{I}>, <S_{k'}, \vec{I'}>)$ defines a
    \emph{live range class}. It represents all the live ranges that begin
    with the write statement $S_{k}$ and
    terminate with the read statement $S_{k'}$, over all loop iterations.
    $\vec{I}$ and $\vec{I'}$ are
    iteration vectors.
  \item A \emph{live range} is one instance of a live range class (a
    live range for a given iteration).
\end{itemize}

Here is an example of a live range class for the scalar \texttt{t}
in the \emph{Mvt} kernel (Figure~\ref{code:mvt}):
$$(<S_{1}, \begin{pmatrix} i \\ j \end{pmatrix}>,
<S_{2}, \begin{pmatrix} i \\ j \end{pmatrix}>)
\quad s.t.\quad 0 \le i < n,~0 \le j < n$$


\begin{figure}[h!tb]  
  \begin{cprog}    
    for (i = 0; i < n; i++)
      for (j = 0; j < n; j++)
        {
        s1: t = A[i][j] * y_1[j];
        s2: x1[i] = x1[i] + t;
        }
\end{cprog}
\caption{\label{code:mvt} One loop from the \emph{Mvt} kernel}
\end{figure}

For each iteration of \texttt{i} and each iteration of \texttt{j},
the previous live range class begins with the write statement $S_{1}$
and terminates with the read statement $S_{2}$.
One instance of this class is the following live range:
$$(<S_{1}, \begin{pmatrix} 0 \\ 0 \end{pmatrix}>,
<S_{2}, \begin{pmatrix} 0 \\ 0 \end{pmatrix}>)$$

It begins in the iteration $i=0$, $j=0$ and terminates in
the same iteration.

\subsection{Live Range Non-Interference}

Any loop transformation is correct if it does not
lead to live range
interference~\cite{pouchet_iterative_2010,vasilache_scalable_2007,konrad_elimination_2011,trifunovic_graphite_2010},
i.e.\ if the \emph{liveness} condition is
guaranteed: no intervening write $S_{w}$ happens within a live range.

To make sure that live ranges do not interfere, it is
sufficient to verify that no write $S_{w}$ happens
within any live range class $(<S_{k}, \vec{I}>, <S_{k'}, \vec{I'}>)$,
i.e.\ that $S_{w}$ is scheduled either before $S_{k}$
or after $S_{k'}$, but not between the two.

In general, to guarantee that a pair of live ranges
do not interfere, one can verify that the first live range
is scheduled entirely before or entirely after the second.
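Under the simplifying assumption of a one-dimensional schedule that
assigns integer logical dates to statement instances, this pairwise check
reduces to interval disjointness. The sketch below is ours and purely
illustrative (real polyhedral schedules are multidimensional):

```c
/* A live range as an interval [begin, end] of integer logical dates,
 * under the simplifying assumption of a one-dimensional schedule. */
struct live_range { int begin; int end; };

/* Two live ranges do not interfere iff one is scheduled entirely
 * before the other. */
int non_interfering(struct live_range a, struct live_range b)
{
    return a.end < b.begin || b.end < a.begin;
}
```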

Figure~\ref{fig:example_non_interference_interval_with_interval} shows
examples where a pair of live ranges do not interfere and examples
where they do.

Our goal is to ignore false dependences so
that a legal tiling can be performed.  We consider it
safe to ignore false dependences and apply tiling
if the non-interference of live ranges is assured.

\begin{figure}[h!]
\centering
 \includegraphics[scale=0.30,keepaspectratio=true]
  {./figures/example_non_interference_interval_with_interval.pdf}
 \caption{Possible schedules for two live ranges}
 \label{fig:example_non_interference_interval_with_interval}
\end{figure}


\section{More examples}
Our goal is to find a sufficient condition guaranteeing
that it is correct to apply tiling while ignoring some false dependences.
In this section we show two more examples illustrating when it is legal
to ignore false dependences and when it is not.
How can we extract such a sufficient condition? The following
observations explain it:

\begin{itemize}
\item A transformation is legal if it guarantees that no live ranges
interfere after the transformation; in particular, tiling is legal if no
live range is broken.
 \item Tiling is composed of two basic transformations: loop strip-mining
and loop interchange.
    \subitem Loop strip-mining is always legal because it
does not change the schedule of statements (execution
order), and thus it can be applied unconditionally.
    \subitem Loop interchange changes the order of iterations,
and thus it may break live ranges.  A closer look, however, shows that it never
changes the order of statements within the loop body (within one iteration).
 \item If a live range class is local to one iteration (i.e., every live range
begins and terminates in the same iteration), changing the order of iterations
preserves liveness: no live range will interfere with
another.  This is illustrated in the next examples.
\item False dependences exist precisely to prevent live ranges from interfering.
If a live range class is private to one iteration, it is safe to ignore the false
dependences on that class, because we know that after tiling the live
ranges will not interfere.
\end{itemize}
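The strip-mining half of this argument can be checked directly on a
one-dimensional loop: re-indexing the iterations over tiles visits them in
exactly the original order. A minimal sketch (the function names and the
tile size are ours, for illustration only):

```c
#include <assert.h>
#include <string.h>

/* Record the visit order of a loop over [0, n) directly, and
 * through a strip-mined version with tile size ts.  Both fill
 * `order` identically: strip-mining does not reorder iterations. */
void visit_plain(int n, int *order)
{
    int k = 0;
    for (int i = 0; i < n; i++)
        order[k++] = i;
}

void visit_strip_mined(int n, int ts, int *order)
{
    int k = 0;
    for (int ii = 0; ii < n; ii += ts)              /* tile loop  */
        for (int i = ii; i < n && i < ii + ts; i++) /* point loop */
            order[k++] = i;
}
```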

\subsection{Example of the \emph{Mvt} kernel \label{sec:exmvt}}
The first example is shown in Figure~\ref{code:mvt-2}. The false dependences
created by the scalar \texttt{t1} on the \texttt{j} loop prevent the compiler
from applying tiling, although tiling for \emph{Mvt} is legal. Applying tiling
only when dependences are forward is a very restrictive test.

Figure~\ref{fig:live_ranges_mvt} represents all the live ranges of
the scalar \emph{t1} in the \emph{Mvt} kernel.  The x axis represents
the \texttt{j} iterations and the y axis the \texttt{i} iterations.
The figure shows the statements executed in each iteration
($S_{1}$ and $S_{2}$).  The first live range is executed in iteration
$i=0$, $j=0$, the second in iteration $i=0$, $j=1$, the third in
$i=0$, $j=2$, and so on.

We notice that in the case of \emph{Mvt} each live range is private to
one iteration: no live range spans more than one iteration, and thus tiling
is safe. The only effect of tiling is that it changes the order of
execution of the live ranges, which is correct in this case (for example,
executing the live range of $i=0$, $j=1$ before the live range of $i=0$,
$j=0$ is correct).

The false dependences generated by the array access \texttt{x1[i]}
do not inhibit loop tiling, as all of them are forward.

\begin{figure}[h!tb]
  \begin{cprog}    
for (i = 0; i <= 2; i++)
  for (j = 0; j <= 3; j++)
    {
    s1: t1 = A[i][j] * y_1[j];
    s2: x1[i] = x1[i] + t1;
    }
\end{cprog}
\caption{\label{code:mvt-2} One loop from the \emph{Mvt} kernel}
\end{figure}

\begin{figure}[h!]
\centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/space_of_live_ranges_mvt.pdf}
 \caption{Live ranges for the \emph{t1} scalar in \emph{Mvt}}
 \label{fig:live_ranges_mvt}
\end{figure}

\subsection{Example 2 \label{sec:exsven}}
The second example is shown in Figure~\ref{code:exsven}. The false dependences
created by the scalar \texttt{t} on the \texttt{j} loop prevent the compiler
from applying tiling. Since the live ranges are not private to one iteration,
the false dependences on the scalar \emph{t} cannot be ignored.

Figure~\ref{fig:live_ranges_sven} shows the live ranges of the scalar \emph{t}.
The first live range begins in iteration $i=0$, $j=0$ and remains live
until the end of the \texttt{j} loop (i.e., iteration $i=0$, $j=3$)\footnote{We
consider that a live range begins with the first write to a variable and
finishes with the latest read; intermediate write/read operations are not
considered. We use a merging algorithm to construct such live ranges: live
ranges that share a common statement and that are defined on the same iteration
domain are merged together to create bigger live ranges.}. Changing the
order of iterations in the case of Figure~\ref{code:exsven} would break
these live ranges.

\begin{figure}[h!]
  \begin{cprog}    
for (i = 0; i < n; i++)
  for (j = 0; j < n; j++) {
        if (j==0)
           s1: t = 0;

        if (j>0 && j<n-1)
           s2: t = t + 1;

        if (j==n-1)
           s3: A[i] = t;
    }
\end{cprog}
\caption{\label{code:exsven} An example where tiling is not possible}
\end{figure}

\begin{figure}[h!tb]
\centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/space_of_live_ranges_sven.pdf}
 \caption{Live range intervals for Example 2}
 \label{fig:live_ranges_sven}
\end{figure}


\section{A sufficient condition to ignore false dependences \label{sec:sufficient_cnd}}
A live range is private to an iteration of a
loop nest if it begins and terminates in the same iteration.

To check whether we can ignore false dependences, we use the following test:
for each loop dimension (loop level), we calculate the distance between
the live range source (the write) and the live range sink (the read).
If the distance for that loop dimension is zero, then the source and the
sink are in the same iteration, and we can thus ignore, during tiling, the
false dependences that belong to that live range class for that dimension.

We say that a dependence $\delta_{s1 \rightarrow s2}$ (a dependence between
the statements $S_{1}$ and $S_{2}$) belongs to a live range class if the
source and the sink of the dependence are part of the same live range class.
In Figure~\ref{fig:dep_belong}, the dependence $\delta_{s1 \rightarrow s2}$
belongs to the live range class $(<S_{1}, \vec{I}>, <S_{2}, \vec{I'}>)$, whereas
the dependence $\delta_{s2 \rightarrow s3}$ does not belong to that live
range class, because its source and sink are from
two different live range classes.

\begin{figure}[h!]
\centering
 \includegraphics[scale=0.25,keepaspectratio=true]
  {./figures/belonging_deps.pdf}
 \caption{Dependence belonging to a live range class}
 \label{fig:dep_belong}
\end{figure}

Figure~\ref{algo-tiling} describes how the proposed tiling test works.

\begin{figure}[h!tb]
  \begin{cprog}
Foreach live range class
  Foreach loop dimension
    1- Calculate the distance between the source and the
       sink of the live range class, in that loop dimension.
    2- If the distance is zero, then all false dependences that belong
       to that live range class are ignored during tiling.
\end{cprog}
\caption{An algorithm to identify which false dependences should be ignored}
\label{algo-tiling}
\end{figure}
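Restricted to dependences whose write and read iteration vectors are known
constants, this test reduces to a per-dimension distance check. The sketch
below is our own simplification (in a real polyhedral implementation the
distance is computed symbolically on the dependence polyhedra, e.g.\ with ISL):

```c
#include <assert.h>

/* Per-dimension distance test for a live range class, under the
 * (restrictive) assumption that the write and read iteration
 * vectors are known constant vectors.  If the distance at `dim` is
 * zero, the live range begins and ends in the same iteration of
 * that loop, and the false dependences belonging to the class may
 * be ignored when tiling that dimension. */
int ignorable_at_dim(const int *write_iter, const int *read_iter, int dim)
{
    return read_iter[dim] - write_iter[dim] == 0;
}
```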

\subsection{\label{sec:live_range_interval_extraction} 
  Construction of Live Range Intervals}
In this section we show how we construct live range classes for a given
memory location $m$.  Each live range class is a tuple composed of a
write statement followed by a read statement, both accessing the same
memory location $m$.

We use read-after-write dependences to construct live range classes.
For this, we use the array data-flow analysis described
in~\cite{feautrier_dataflow_1991} and implemented
in the ISL library~\cite{verdoolaege_isl:_2010}.  Array data-flow
analysis answers the following question: given a value $v$ that is
read from a memory cell $m$, compute the
instruction instance $w$ that is the source of the
value $v$.  Array data-flow analysis considers only read-after-write
(true) data dependences. The result is a (non-convex) dependence
relation between write/read statement instances.  Each dependence relation
is described by a polyhedron defining in which iterations
the dependence exists. We use this polyhedron to define the live range
interval domain, which indicates on which iterations the
live range is defined (for example, a live range may be defined only for
some iterations).

This systematic construction needs to be completed with special
treatment of live-in and live-out array elements. Array data-flow
analysis provides the necessary information for live-in intervals, but
inter-procedural analysis, or an analysis of the array regions accessed
outside the scope of the static control part, is needed for live-out
properties. This is an orthogonal problem, and for the moment we
conservatively assume that the program is manually annotated with
live-out arrays (the analysis for scalars is not a problem).

\subsection{Correctness of the Parallelism Detection Step}
Tiling does not need any expansion or privatization.  But if in addition to
tiling parallelization is also needed. Thread-private variables have to
be privatized. This is necessary only for parallelization but not for tiling
correctness.

Scalar variables that carry interfering intervals that live and die within a
single iteration of a parallel loop may be safely ignored when detecting
parallelism, but they must be marked as thread-private in OpenMP.
This combination results in a slight modification of the state-of-the-art
violated dependence analysis method
\cite{vasilache_violated_2006,konrad_elimination_2011}.
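For illustration, marking such a scalar thread-private in OpenMP is a
single clause on the parallel loop. The sketch below is our own minimal
example in the style of the \emph{Mvt} kernel (linearized arrays and the
function name are ours); without OpenMP the pragma is simply ignored and
the loop runs sequentially with the same result:

```c
#include <assert.h>

/* x1[i] += sum over j of A[i][j] * y_1[j], with the scalar t marked
 * private so that each thread gets its own copy.  A is stored as a
 * linearized n-by-n array.  Compiled without -fopenmp, the pragma is
 * ignored and the loop runs sequentially with the same result. */
void mvt_parallel(int n, const double *A, const double *y_1, double *x1)
{
    int i, j;
    double t;
#pragma omp parallel for private(j, t)
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++) {
            t = A[i * n + j] * y_1[j];
            x1[i] = x1[i] + t;
        }
}
```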

Note that the 3AC transformation does not impact parallelism
detection, as it only induces intra-block live range
intervals.  This paper shows that the 3AC transformation does not harm the
application of state-of-the-art polyhedral compilation algorithms; this
was a strong intuition behind the design of low-level polyhedral
frameworks such as GRAPHITE.

%------------------------------------------------------------------------------%
\section{Implementation}

The proposed technique was
implemented in the \emph{Pluto} source-to-source compiler
\cite{bondhugula_practical_2008}.  Although \emph{Pluto} is a polyhedral
compiler, the technique is usable in any other compiler.

In \emph{Pluto}, tiling and parallelization detection happen after the
FCO algorithm \cite{Gri04b,bondhugula_practical_2008} (an algorithm used to
find a schedule maximizing data locality and outermost loop parallelism).
The FCO algorithm can identify tilable bands (a tilable band is a set of
loop dimensions with at least two outer loops with
no negative dependence distance), but we do not use its results,
for two reasons: first, the FCO scheduling is performed without ignoring
false dependences; second, we want a compiler-independent
tiling technique.  We therefore implemented a new pass that
applies the new tiling test described in Section~\ref{sec:sufficient_cnd}.
The goal of this pass is to identify tilable and parallelizable bands.

\section{Experimental Results}


Our experiments target a dual-socket AMD Opteron (Magny-Cours) blade
with $2\times12$ cores at 1.7GHz and 16GB of RAM.

We use OpenMP as the target of the automatic transformations. Baseline and
optimized codes were first processed with the modified version of
\textit{Pluto}.  The baseline and generated OpenMP codes are then compiled
with GCC 4.4, with \texttt{-O3 -ffast-math} optimization\footnote{GCC
  performs no further loop nest optimization on these benchmarks.}.  We
report the median of the speedups obtained over 30 runs of each
benchmark.

Our technique was tested on the
PolyBench\footnote{\url{http://www.cse.ohio-state.edu/~pouchet/software/polybench}}
suite with large datasets.  To stress the method on realistic false
dependences, scalar variables were intentionally introduced in each one
of the benchmark kernels, either
by transforming the source program into Three-Address Code (3AC) or by applying
Partial Redundancy Elimination (PRE), to increase the number of false
dependences.  Both transformations were performed manually to stress the
test, since they are not applied automatically by \emph{Pluto}.

\begin{figure*}[h!]
 \begin{minipage}[b]{1\textwidth}
  \centering
  \includegraphics[scale=0.45] {../test-pdf/ignore-false-deps-for-tiling-tac.pdf}
  \caption{Speedup for the PolyBench benchmark (3AC)}
  \label{fig:performance_numbers_3AC}
 \end{minipage}
 \begin{minipage}[b]{1\textwidth}
  \centering
  \includegraphics[scale=0.40] {../test-pdf/ignore-false-deps-for-tiling-pre.pdf}
  \caption{Speedup for the PolyBench benchmark (PRE)}
  \label{fig:performance_numbers_PRE}
 \end{minipage}
\end{figure*}

Figures~\ref{fig:performance_numbers_3AC}
and~\ref{fig:performance_numbers_PRE} compare the effect of optimizing the code
with and without the option
\emph{``--ignore-false-dependences''}.
Applying PRE is only possible for some kernels, which explains why
Figure~\ref{fig:performance_numbers_PRE} does not show all the PolyBench
kernels.

The experiments also show that the classical tiling test is too restrictive:
it disallows tiling although tiling is legal.  By ignoring false dependences,
tiling was enabled in many kernels (\emph{2mm, 3mm, gemm}, \ldots) and the
speedup reached $54\times$ for \emph{Gemm}.

Tiling was also applied in kernels such as \emph{atax}, \emph{bicg} and
\emph{trisolv}, but in these cases the cost of parallelization and the overhead
of tiling were high. This loss in performance is not due to our
technique, but to the lack of models that predict the profitability of tiling.

The current implementation passes all of the PolyBench suite and shows very
encouraging results.

\section{Conclusion and Future Work}
Loop nest tiling is one of the most profitable loop nest transformations.
Due to its wide applicability, we believe that any enhancement of tiling
will impact a wide range of benchmarks.
The proposed technique enables the compiler to ignore false dependences
on iteration-private live ranges, allowing it to discover
more tilable loop bands.

Using our technique, loop nest tiling can be applied without any
expansion or privatization, apart from the necessary marking of
thread-private data for parallelization; the memory footprint is
thus minimized.  The resulting speedups are very close to the optimum
obtained when all false dependences are ignored.

\input{source/future_work}

%%----------------------------------------------------------------------%%
%END of Doc

\bibliographystyle{abbrv}
\bibliography{bibliography}

\end{document}
