\section*{Synthesis}
Multi-core processors are now in widespread use
across almost all areas of computing: desktops,
laptops, and accelerators such as GPGPUs.
To harness the power of multiple cores and complex memory hierarchies,
powerful compiler optimizations, and especially
loop nest transformations, are once again in high demand.

Among optimization frameworks, the polyhedral framework has
proven notably successful. It is an algebraic abstraction for
reasoning about loop transformations: it makes it possible to model,
construct, and apply complex loop nest transformations,
addressing most of the parallelism adaptation and mapping challenges.

To preserve the semantics of the original program, data dependences
need to be analyzed. Two types of data dependences exist:
data-flow (true) dependences and memory-based (false) dependences. Memory-based
dependences are induced by the reuse of the same temporary
variable.
These memory-based dependences not only increase the total number of dependences,
and with it the compilation time, but, most importantly, they reduce
the degrees of freedom available to express effective loop nest transformations.

\paragraph{Problem}
it is well known that memory-based dependences
reduce the opportunities for automatic parallelization
and other loop transformations. To deal with memory-based dependences,
many previous works have proposed expansion and privatization.

Memory expansion assigns a separate
memory location to each executed iteration,
while privatization assigns a private copy
to each thread executing in parallel.
Maximal expansion enables maximal
freedom for parallelization and loop transformations, but incurs
a huge memory footprint.

Our goal in this work is to enable
more loop nest transformations in kernels that contain memory-based
dependences, without resorting to
privatization or expansion. None of the kernels that we studied
can be optimized by other state-of-the-art polyhedral frameworks.

\paragraph{Proposed solution}
to enable loop nest transformations, we propose to ignore memory-based
dependences and to use \emph{live range non-interference} constraints
to guarantee the correctness of transformations.

By ignoring memory-based dependences, the compiler has more freedom to
find loop nest transformations such as loop interchange, loop blocking,
and outermost parallelization.
Using our technique, loop transformations can be applied without any expansion
or privatization; privatization and expansion may only be needed to parallelize
the loop, and the memory footprint is thus minimized.

\begin{figure*}[ht]
\begin{minipage}[b]{0.5\textwidth}
  \begin{cprog}
for (i = 0; i < N; i++)
{
  s1:  t = ...
  s2:  ... = t

  s3:  t = ...
  s4:  ... = t
}
  \end{cprog}
\end{minipage}
\hspace{1cm}
\begin{minipage}[b]{0.5\textwidth}
   \includegraphics[bb=0 0 412 300, scale=0.35,keepaspectratio=true]
   {./figures/example_non_interference_interval_with_interval.pdf}   
\end{minipage}
\caption{An example of non-interference of live range intervals 
\label{fig:sythesis_example_of_non_interference}}
\end{figure*}

\paragraph{Example of Non-Interference Constraints}
there are two live range
intervals for the scalar \emph{t} in Figure
\ref{fig:sythesis_example_of_non_interference}.
To guarantee the correctness of loop transformations,
we check that the two intervals do not interfere, i.e.,
that the value produced by the first write to t (s1) is not killed by
the second write (s3) before its last read (s2). Changing the schedule of the
statements is possible as long as this non-interference condition
is preserved.
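The condition can be stated on scheduled live ranges as follows (the notation here is ours, a sketch of the constraint rather than its exact formulation in the paper). Writing $\sigma$ for the schedule and a live range as the pair of its defining write $W$ and last read $R$, two live ranges $(W_1, R_1)$ and $(W_2, R_2)$ of the same variable do not interfere under $\sigma$ when one ends before the other begins:
\[
\sigma(R_1) \prec \sigma(W_2) \quad \lor \quad \sigma(R_2) \prec \sigma(W_1).
\]
For the figure, this amounts to requiring that s2 be scheduled before s3, or s4 before s1.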

\paragraph{Global View of the Proposed Technique}

\begin{enumerate}
      \item Perform a dependence analysis. Two kinds of dependences are found: 
      \begin{itemize}
	  \item Data-flow dependences.
	  \item Memory-based dependences.
      \end{itemize}
      \item Remove memory-based dependences.
      \item Create non-interference constraints (using the formal definition of live range non-interference).
      \item Apply loop nest transformations.
  \end{enumerate}

\paragraph{Implementation and Experiments}
the proposed technique was implemented in \emph{Pluto},
a source-to-source polyhedral compiler, and was evaluated
on the PolyBench benchmark suite.

While state-of-the-art polyhedral compilers fail to apply loop transformations
to kernels with scalars, our technique achieved up to a $4 \times$
speedup on those kernels by applying advanced loop nest transformations
such as loop interchange, loop blocking, and outermost parallelization.

\paragraph{Other Contributions}
two optimizations were proposed:
\begin{itemize}
 \item Live range merging: reduces the total number of live range intervals
by merging consecutive live ranges, which in turn reduces the total number of
constraints and improves the overall compilation time.
 \item Using false dependences to force interval non-interference: reduces the
number of constraints without a significant loss in performance.
\end{itemize}

By applying these optimizations, we reduce the
number of constraints while retaining maximal freedom for loop
transformations.

\paragraph{Conclusion} we have shown that using live range non-interference constraints
is an effective technique for enabling transformations on kernels with scalar
variables. Our next goal is to
implement the proposed technique in \emph{Graphite} (the polyhedral framework of
\emph{GCC}) and to enhance
the scalability of the technique so that it can address more complex codes.
