\documentclass{article}

\oddsidemargin  0in
\evensidemargin 0in 
\textwidth      6.5in
\topmargin      0.0in
\textheight     9.0in
\headheight     0.0in
\headsep        0.0in

\usepackage{fancybox}
%\usepackage{pstricks}
\usepackage{amsfonts}
\usepackage[bookmarks=true,
            bookmarksnumbered=true,
            bookmarksopen=false,
            plainpages=false,
            pdfpagelabels,
            colorlinks,
            citecolor=black,              % color of cite links
            pagecolor=black,         % color of page links
            linkcolor=black,         % color of hyperref links
            menucolor=black,         % color of Acrobat Reader menu buttons
            urlcolor=black]{hyperref}       % color of \url{...} links
\usepackage{graphicx}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage{amsmath}
\usepackage{amssymb}
% \usepackage[square,comma,sort]{natbib}
\usepackage{authblk}
\usepackage{color}
\usepackage{wrapfig}
\usepackage{subfig}
\usepackage{listings}

\newcommand{\HRule}{\rule{\linewidth}{0.5mm}}
\pdfoutput=1
\relax
\pdfcompresslevel=9             %-- 0 = none, 9 = best


% \title{Project XYZ : }
\newcommand{\alert}[1]{\textcolor{red}{#1}}
\newcommand{\code}{\texttt}
\lstset{language=C,boxpos=b}

\begin{document}

\begin{titlepage}

  \hfill

  \vspace{4cm}

  \begin{center}
    {\bf\huge Transactional Memory Support for Data Flow Applications and Architectures}

    \vspace{0.4cm}

    \HRule

    \vspace{0.4cm}

    {\Large Data flow architectures that support data flow applications parallelized through transactional memory}\\[1cm]
  \end{center}

  \vspace{3cm}
  \setlength{\parindent}{0in}
  {\large
  {\bf Roberto Gioiosa}\footnote{\texttt{roberto.gioiosa@bsc.es}}: Barcelona
  Supercomputing Center, Spain \\[0.2cm]
  {\bf Roberto Giorgi}\footnote{\texttt{giorgi@dii.unisi.it}}: University of Siena, Italy \\
  }

  \vspace{3cm}
  
  \begin{center}
    \today
  \end{center}
\end{titlepage}

\section{Objectives}
In this document we attempt to adapt data-flow models (DFMs) and data-flow architectures (DFAs) to run transactional memory (TM) applications.
The general intuition is that data-flow threads are suitable to run transactions, as each data-flow thread can be considered an atomic entity that can be restarted easily if a transaction aborts.

\section{Background}
\label{sec:intro}
\nocite{Giorgi07a}
Data-flow models (DFMs) have the potential to exploit higher levels of thread parallelism by executing tasks that have no data dependencies in parallel.
\begin{figure*}[h]
  \centering
  \subfloat[Original code]{
    \centering
    \lstinputlisting{lists/data-flow.lst}
    \label{fig:data-flow-code}
  }
  \subfloat[Task dependency graph]{
    \centering
    \includegraphics[width=0.3\textwidth]{images/dataflow-graph.pdf}
    \label{fig:data-flow-graph}
  }
  \caption{Example of a sequential program parallelized through data-flow paradigm}
  \label{fig:data-flow}
\end{figure*}
Figure~\ref{fig:data-flow} shows a typical example of a sequential program parallelized with the data-flow paradigm: instructions are divided into four tasks, $T_1$, $T_2$, $T_3$, and $T_4$, that are control-independent, i.e., each task performs a set of instructions that is independent of the others (no branches).
In this scenario, the only dependencies among tasks are data dependencies determined by the flow of data between tasks.
Tasks that are also data-independent (such as $T_2$ and $T_3$ in Figure~\ref{fig:data-flow}) can execute in parallel.
Efficient execution of data-flow programs may require special architectures, called data-flow architectures (DFAs).
In this document we assume the DTA-C architecture~\cite{Giorgi07a}.
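As a purely illustrative sketch of such a decomposition (the variable and task bodies below are our own, not necessarily those of the external listing), a sequential fragment can be split into four control-independent tasks as follows:

\begin{lstlisting}
/* Hypothetical sequential fragment decomposed into four
 * control-independent tasks. T2 and T3 consume only T1's
 * output a, so they can run in parallel; T4 waits for
 * both x and y. */
int a, x, y, z;

void T1(void) { a = 5;     }   /* produces a       */
void T2(void) { x = a + 1; }   /* consumes a       */
void T3(void) { y = a * 2; }   /* consumes a       */
void T4(void) { z = x + y; }   /* consumes x and y */
\end{lstlisting}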

Parallel applications, however, already execute control-independent code, with explicit synchronization mechanisms employed to coordinate accesses to shared memory locations.
Transactional memory (TM) is a parallel programming model that aims at simplifying synchronization by raising the level of abstraction from syntax to semantics.
Threads mark compound statements as atomic, with the expectation that the underlying transactional memory system will execute them in parallel whenever possible.
Transactions execute optimistically in parallel, rolling back and retrying when a conflict occurs.
In this document we attempt to extend DFMs to TM, leveraging the fact that data-flow tasks are managed as independent entities and, thus, can easily be rolled back.
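To make the programming model concrete, the following minimal sketch marks a compound statement as atomic. The \code{begin\_tx}/\code{end\_tx} names follow the example used later in this document; their no-op bodies are a single-threaded stand-in of our own, not a real TM runtime:

\begin{lstlisting}
/* Minimal single-threaded stand-in for a TM runtime.
 * begin_tx()/end_tx() delimit an atomic compound statement;
 * a real TM system would execute such blocks optimistically
 * and roll back and retry on conflict. Here they are no-ops
 * so the sketch compiles and runs. */
static void begin_tx(void) { /* start speculative execution    */ }
static void end_tx(void)   { /* validate, then commit or abort */ }

static int b = 0;           /* shared variable */

void deposit(int amount)
{
    begin_tx();             /* atomic { ... }                  */
    b = b + amount;         /* access tracked by the TM system */
    end_tx();
}
\end{lstlisting}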

Future processors are envisioned to have hundreds of cores.
Although parallel applications already execute several independent threads, using all these cores to run the application's threads might not be the most efficient solution, for two main reasons: First, the overhead introduced by TM may well outweigh the parallelism gained, especially for software transactional memory (STM) solutions~\cite{Cascaval08,kestor:pact11}.
Second, Amdahl's law limits the achievable speedup to the inverse of the sequential fraction of the code.
Kestor et al.~\cite{kestor:pact11} showed that using some hardware threads to support the computation, rather than to directly execute more application threads, may provide considerable benefits.
Likewise, we aim to use some of the cores in a data-flow architecture to support the execution of the sequential regions of the application, reducing the amount of sequential work and, thus, pushing forward the theoretical speedup limit set by Amdahl's law.

\section{Transactional memory for data-flow architectures}
Running TM applications on DFAs requires the extension of the concept of \emph{task} and hardware support to handle conflict detection and abort/restart.

For simplicity, we start by addressing the problem described in the following scenario: an application consists of several (P)threads that are expected to execute in parallel.
Application threads use transactions to synchronize accesses to shared memory locations; the underlying TM system detects conflicts and eventually rolls back aborted transactions.
Since this level of parallelization is explicitly designed and implemented by the programmer, we refer to it as \emph{explicit parallelization}.
Each of these threads performs a sequence of statements, similar to those reported in Figure~\ref{fig:data-flow-code}, thus each thread can be further decomposed into a sequence of tasks using a DFM (\emph{implicit parallelization}), which would speed up the execution of each thread.

\subsection{Data-flow and transactional tasks}
Within the same thread, there might be two possible task types: regular \emph{data flow tasks} (DFTs) and \emph{transactional tasks} (TTs).
DFTs behave in the classical data-flow way: a task is a sequence of control-independent instructions with possible data dependencies (inputs/outputs) with other tasks (see Figure~\ref{fig:data-flow-graph}).
In this work, DFTs become runnable only once all their inputs are ready (thus, their synchronization count SC equals 0~\cite{Giorgi07a}).
Once a task is runnable, the scheduler assigns it to a particular core and monitors its execution.

A TT, instead, is a task that executes a transaction and may therefore abort.
As any other task, a transactional task becomes runnable only when all its inputs are ready (SC = 0).
However, unlike regular DFTs, a TT 1) may be forced to re-execute all its instructions if the TM system detects a conflict and aborts the transaction, and 2) may get inputs from tasks that belong to other concurrent threads.
This means that a transactional task has two kinds of inputs.
The first kind is generated by the data-flow decomposition and represents the data dependencies with other tasks in the same thread (\emph{local inputs}); local inputs are produced by previous tasks and remain constant during the execution of a task.
The second kind consists of shared variables that can be read or modified by concurrent threads (\emph{remote inputs}).
Global shared variables are considered always defined throughout the execution of the application, but their values may change during the execution of a task as the result of another thread modifying the content of the variable.
In this part of the document, we assume that a task is either a DFT or a TT.
We leave hybrid tasks that execute transactional and non-transactional code for future work.
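A minimal sketch of how a scheduler might represent these notions follows; the \code{struct} layout and field names are hypothetical, not the actual DTA-C task descriptor:

\begin{lstlisting}
/* Hypothetical task descriptor (the layout is ours, not
 * DTA-C's). SC is the synchronization count: a task becomes
 * runnable when SC reaches 0. */
enum task_kind { DFT, TT };

struct task {
    enum task_kind kind;
    int sc;          /* pending inputs; runnable when 0            */
    int n_local;     /* local inputs: produced once, then constant */
    int n_remote;    /* remote inputs (TTs only): shared variables */
};

/* Invoked when one of the task's inputs becomes ready;
 * returns 1 when the task can be scheduled. */
int input_ready(struct task *t)
{
    if (t->sc > 0)
        t->sc--;
    return t->sc == 0;
}
\end{lstlisting}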
%A task $T_i$ that modifies only local variables, $T_1$, $T_2$, and $T_4$ in this example, is a DFT.
%Tasks that modify shared variables, $T_3$ in this example.  instead are TTs.

\begin{figure*}[h]
  \centering
  \subfloat[Original code]{
    \centering
    \lstinputlisting{lists/tm.lst}
    \label{fig:tm-code}
  }
  \subfloat[Task dependency graph]{
    \centering
    \includegraphics[width=0.6\textwidth]{images/tm-df.pdf}
    \label{fig:tm-graph}
  }
  \caption{Example of a TM program parallelized through the data-flow paradigm}
  \label{fig:tm}
\end{figure*}
Figure~\ref{fig:tm-code} shows an example of a TM application: in this code, accesses to the shared variable $b$ are performed from within transactional code (delimited by \code{begin\_tx} and \code{end\_tx}), while accesses to the local variables $a, c, x, y, z$ need not be protected by transactions.
Let us suppose that we run two parallel threads (Thread1 and Thread2 in Figure~\ref{fig:tm-graph}): each thread alternates transactional (executed optimistically) and non-transactional code.
In Figure~\ref{fig:tm-graph}, tasks $T_2$ and $T_3$ become runnable once their inputs ($a$, and $b, c$, respectively) are ready (i.e., produced by task $T_1$).
Similarly, task $T_4$ becomes runnable once $x,y,c$ are all ready.
However, while $T_1$, $T_2$, and $T_4$ only access local variables (thus, they are DFTs), $T_3$ also accesses the global shared variable $b$, hence $T_3$ is a TT.
This means that $T_3$'s inputs are both local ($c$) and remote ($b$).
Let us assume that, in the example in Figure~\ref{fig:tm}, Thread1 and Thread2 execute their transactions (task $T_3$) concurrently and then, because of the conflict on $b$, Thread1 commits while Thread2 aborts.
In this scenario, Thread2 restarts the execution of task $T_3$, but the value of $b$ in this re-execution should be the one modified by Thread1.
This means that $T_3$ of Thread1 should ``forward'' the new value of $b$ to the core where task $T_3$ of Thread2 is running (dashed arrow in Figure~\ref{fig:tm-graph}).
We could leverage the same mechanism used to pass data from $T_1$ to $T_3$, with the difference that $b$ is now provided by $T_3$ of Thread1 rather than by $T_1$ of Thread2.
If Thread2 aborts its transaction again (e.g., because of conflicts with other threads), the execution of the transaction (task $T_3$) restarts once more and, every time, the most recently updated values of the remote inputs should be forwarded to the core where task $T_3$ of Thread2 is running.
We denote the fact that a task may receive different versions of the same variable multiple times by marking the scheduling counter of $T_3$ as 2+.
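The retry behavior of a TT can be sketched as follows. The runtime hooks are invented for illustration; in the real system, the refreshed values of remote inputs such as $b$ would arrive from the committing thread's core rather than from a local stub:

\begin{lstlisting}
#include <stdbool.h>

static int attempts = 0;

/* Stub runtime hooks (invented for illustration). */
static void fetch_remote_inputs(void)
{
    /* In the real system: receive the most recently
     * forwarded values of remote inputs such as b. */
}

static bool try_transaction(void)
{
    attempts++;
    return attempts >= 2;   /* pretend the first attempt aborts */
}

/* A TT re-executes until its transaction commits; each
 * retry starts from the most recently forwarded values
 * of its remote inputs. */
void run_tt(void)
{
    while (!try_transaction())
        fetch_remote_inputs();
}
\end{lstlisting}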

\subsection{Conflict detection}
Conflict detection refers to the identification of concurrent accesses to shared memory locations within transactions, where at least one transaction attempts to modify a shared variable.
While the implementation of the conflict detection mechanism itself depends on the TM system design, in DTA-C conflict detection could be integrated with the scheduling activity.
This would simplify the task of forwarding shared values modified by a committing transactional task to aborted transactional tasks.
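As an illustration of what such a check involves (the representation below, with small fixed-size read/write sets, is our assumption and not a DTA-C mechanism), two transactions conflict when one's write set intersects the other's read or write set:

\begin{lstlisting}
#include <stddef.h>

/* Small fixed-size access sets, purely for illustration. */
struct access_set { const void *addr[8]; int n; };

static int in_set(const struct access_set *s, const void *a)
{
    for (int i = 0; i < s->n; i++)
        if (s->addr[i] == a)
            return 1;
    return 0;
}

/* Returns 1 when tx1's write set intersects tx2's read
 * set or write set, i.e., the transactions conflict. */
int conflicts(const struct access_set *w1,
              const struct access_set *r2,
              const struct access_set *w2)
{
    for (int i = 0; i < w1->n; i++)
        if (in_set(r2, w1->addr[i]) || in_set(w2, w1->addr[i]))
            return 1;
    return 0;
}
\end{lstlisting}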
\alert{To be continued...}




\bibliography{refs}           % The name of your .bib file.
\bibliographystyle{plain}     % The style of your bibliography


\end{document}



%%% Local Variables: 
%%% mode: latex
%%% TeX-master: t
%%% End: 
