% PACT 2011 submission

\documentclass[preprint,9pt]{sigplanconf}
%\documentclass[times, 10pt,twocolumn]{article}
\usepackage{times}
\usepackage[lined,ruled,algonl]{algorithm2e}
\usepackage{graphicx}
\usepackage{verbatim}
\usepackage{amsmath, amssymb}
\usepackage{caption}
\DeclareCaptionType{copyrightbox}

\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{fact}[theorem]{Fact}

\newenvironment{proof}		{\noindent{\em Proof.}\hspace{1em}}{\qed}
\def\squarebox#1{\hbox to #1{\hfill\vbox to #1{\vfill}}}
\newcommand{\qedbox}            {\vbox{\hrule\hbox{\vrule\squarebox{.667em}\vrule}\hrule}}
\newcommand{\qed}               {\nopagebreak\mbox{}\hfill\qedbox\smallskip}


%\theoremstyle{definition}
\newtheorem{definition}{Definition}
%\theoremstyle{observation}
\newtheorem{observation}{Observation}


% %\theoremstyle{definition}
% \newtheorem*{definition}{Definition}
% %\theoremstyle{observation}
% \newtheorem*{observation}{Observation}


\newcommand{\blocks}{\dashv}
\newcommand{\minval}{\textrm{minval}}
\newcommand{\maxval}{\textrm{maxval}}
\newcommand{\union}{\cup}
\newcommand{\CI}{\textrm{ComputeIndex}}
\newcommand{\LOI}{\textrm{LastOutputIndex}}
\newcommand{\ceil}[1]{\lceil #1 \rceil}
\newcommand{\norm}[1]           {\left\| #1\right\|}
\newcommand{\set}[1]            {\left\{ #1 \right\}}
\newcommand{\abs}[1]            {\left| #1\right|}
\newcommand{\card}[1]           {\left| #1\right|}

\newcommand{\Sc}{\ensuremath{\textit{Sc}}}
\newcommand{\Pc}{\ensuremath{\textit{Pc}}}
\newcommand{\length}{\textit{length}}

%\newcommand{\spath}{\ensuremath{\textit{s}}}
%\newcommand{\p}{\ensuremath{\textit{p}}}
%\newcommand{\dummy}{\ensuremath{\textit{d}}}

\newcommand{\setivals}{\textsc{SetIvals}}

\begin{document}

\title{Efficient Deadlock Avoidance for Streaming Computation with Filtering \vspace{-0.8in}}
\authorinfo{}


\maketitle
 \thispagestyle{empty}

\begin{abstract}

  Parallel streaming computations have been studied extensively, and
  many languages, libraries, and systems have been designed to support
  this model of computation.  % While some streaming computations send
%   data at \textit{a priori} predictable rates on every channel between
%   compute nodes, many natural applications lack this property.  
  In particular, we consider acyclic streaming computations in which
  individual nodes can choose to \emph{filter}, or discard, some of
  their inputs in a data-dependent manner.  In these applications, if
  the channels between nodes have finite buffers, the computation can
  \emph{deadlock}.  One method of deadlock avoidance is to augment the
  data streams between nodes with occasional \textit{dummy messages};
  however, for general DAG topologies, no polynomial time algorithm is
  known to compute the intervals at which dummy messages must be sent
  to avoid deadlock.

  In this paper, we show that deadlock avoidance for streaming
  computations with filtering can be performed efficiently for a large
  class of DAG topologies. We first present a new method where each
  dummy message is tagged with a destination, so as to reduce the
  number of dummy messages sent over the network.  We then give
  efficient algorithms for dummy interval computation in
  series-parallel DAGs.  We finally generalize our results to a larger
  graph family, the \textit{CS4 DAGs}, in which every undirected cycle
  has exactly one source and one sink.  Our results show that, for a
  large set of application topologies that are both intuitively useful
  and formalizable, the streaming model with filtering can be
  implemented safely with reasonable overhead.
\end{abstract}


%------------------------------------------------------------------------- 
\section{Introduction}
\label{sec:intro}

Streaming is an effective paradigm for parallelizing complex
computations on large datasets across multiple computing resources.
Examples of application domains that use the streaming paradigm
include media~\cite{Khai01}, signal processing~\cite{Romein06},
computational science~\cite{Liu06}, data mining~\cite{Gaber05},
and others~\cite{Thies10}.  Languages that explicitly support
streaming semantics include Brook~\cite{brook}, Cg~\cite{Mark03},
%StreamC/KernelC~\cite{Kapasi03},
StreamIt~\cite{tka02},
%Streams-C~\cite{gsak00},
and X~\cite{Franklin06}.
A streaming application is typically implemented as a network of 
\emph{compute nodes} connected by unidirectional communication
\emph{channels}.  Abstractly, the streaming application is a directed
dataflow multigraph, with the node at the tail of each edge (channel)
able to transmit data, in the form of one or more discrete
\emph{messages}, to the node at its head.  In this paper, we consider
only directed acyclic multigraphs.

Many streaming languages and libraries support the synchronous
dataflow (SDF)~\cite{Lee87} model, where, for a given input message
stream, the number of messages consumed and produced by each node on
each channel incident on it is known at compile time.  However, the
assumptions of SDF are not an intuitively good fit for all streaming
applications.  In particular, the node's decision on whether to send
an output message in response to an input, and which subset of output
channels to send messages on, may naturally be data-dependent.  We say
that nodes that can make such decisions at run-time exhibit
\textit{filtering} behavior.  Consider, for example, the simple split/join topology shown in
Figure~\ref{fig:splitjoin}.  In a streaming application, the split
node $A$ might analyze an input and decide to send it to some subset
of its children for further processing. For example, an object
recognition system might receive a video frame and, based on some
initial segmentation and analysis in the split node, might forward
that frame to one or more dedicated modules that recognize particular
types of object.  Each recognizer in turn might or might not trigger a
``success'' message to the join node $D$.  Finally, any information
collected at $D$ might be sent downstream to be merged with other
analyses that were performed in parallel on the same frame.
Two applications of this type are considered in~\cite{ASAP10}.


\begin{center}
	\includegraphics[scale=0.2]{splitjoin}
	\captionof{figure}{A simple split/join streaming topology.}
	\label{fig:splitjoin}
\end{center}
 While filtering behavior can be simulated in an SDF framework by
sending enough extra messages, such workarounds may be both unnatural
for the application programmer and a waste of channel bandwidth.  In
particular, if many channels in the streaming application share the
same physical resource (e.g.\ a common bus or network connection, or
one CPU handling multiple message queues), the ability to filter might
make the difference between an efficient application and one that
suffers communication bottlenecks.  In Section~\ref{sec:empirical}, we
provide empirical evidence that the number of messages sent across
channels can make a difference in real performance for streaming
applications.

This work addresses the challenge of safely realizing streaming
applications when nodes are permitted to filter.  For most streaming
languages, the programmer is allowed to assume infinite buffer
capacity on channels that connect compute nodes.  In practice,
however, the compiler allocates finite channel buffers.  With finite
buffers, a filtering application can deadlock even if it has no
directed cycles, whereas an acyclic SDF application cannot.  If a language
provides infrastructural support for combining computational modules
into a streaming topology with filtering, that language's compiler and
runtime should ensure that such deadlocks are avoided.

Li et al.~\cite{SPAA10} formally modeled streaming computation DAGs
with filtering and derived the precise conditions under which deadlock
can occur in such DAGs.  They gave two algorithms for deadlock
avoidance that work by sending occasional ``dummy messages'' between
nodes.  These algorithms were called the \emph{Propagation Algorithm}
and the \emph{Non-propagation Algorithm}; each may be preferable
under different conditions.  However, the intervals
at which each node must emit dummy messages to avoid deadlock while
minimizing dummy message traffic are in general challenging to
compute.  In particular, Li et al.'s algorithms for computing
dummy-message intervals run in worst-case time exponential in the size
of the application's topology, raising the question of whether a safe
filtering paradigm can be implemented efficiently as part of compiling
a streaming application.

In this work, we show that for a large class of intuitive and useful
DAG topologies, deadlock avoidance in the presence of filtering can be
guaranteed efficiently.  Our contributions are:
\begin{enumerate}
\item We present a new version of the Propagation Algorithm.  In Li
  et al.'s propagation algorithm, a node always forwards any dummy
  message it receives along all its outgoing edges.  We propose a
  \emph{destination-tagged propagation algorithm}, where every dummy
  message is tagged with a specific destination and does not propagate
  past this destination, potentially reducing the communication and
  computation overheads due to dummy messages.

\item We provide efficient algorithms to compute dummy message
  schedules that guarantee deadlock freedom for both the
  destination-tagged propagation algorithm and the original
  Non-propagation algorithm of~\cite{SPAA10} when the application
  topology is a series-parallel (SP) DAG~\cite{Valdes79}.

\item We then extend these results to a larger family of topologies,
  the CS4 DAGs, that permit limited communication between parallel
  branches of a computation. We precisely characterize the structure
  of CS4 DAGs and use this structure to extend our efficient deadlock
  avoidance algorithms to them.  The CS4 DAGs represent an abstraction
  that balances expressibility with efficiency of deadlock avoidance.
\end{enumerate}

% This paper is organized as follows: Section~\ref{sec:background}
% provides the background on filtering applications, dummy messages, and
% SP-dags, and Section~\ref{sec:empirical} provides empirical evidence
% that the number of dummy messages sent in a network can have a
% significant impact on the performance of the application.
% Section~\ref{sec:destTagged} introduces destination tagged dummy
% messages.  Section~\ref{sec:sp-dags} provides efficient deadlock
% avoidance algorithms for SP-dags.  Sections~\ref{sec:cs4} and
% \ref{sec:sp-ladder-dummy} introduce CS4 DAGs and provide efficient
% deadlock avoidance for them respectively.

\subsection*{Related Work}

% Synchronous dataflow (SDF)~\cite{Lee87} is a dataflow model in which
% the data consuming and producing rate are static. The rates are known
% at compiling time so that a compiler can yield a valid schedule to
% avoid deadlocks. SDF does not support filtering semantics, which makes
% it inappropriate to applications with dynamic data rates.

SDF was generalized to Dynamic Data Flow (DDF) by Lee~\cite{Lee91} and
Buck~\cite{Buck94}.  In a DDF graph, firing of nodes can be determined
through the use of an explicit boolean-valued~\cite{Lee91} or
integer-valued~\cite{Buck94} control input.  In the model of Li et
al.~\cite{SPAA10}, this control information is encapsulated within the
node and is therefore unavailable to the compiler and/or scheduler.
Here, synchronization between multiple streams into each node is
supported via the use of a non-negative sequence number associated
with each data item.

StreamIt~\cite{tka02} is a streaming language and compilation toolkit
that supports slightly generalized SDF semantics.  Applications in
StreamIt are constructed from three topology primitives: pipeline,
split-join, and feedback. While these three primitives generate
hierarchical application topologies that facilitate compiler analysis,
they limit the kinds of streaming topologies that StreamIt can support
well~\cite{Thies10}. In this paper, we will discuss broader classes of
DAG topologies than those that StreamIt supports.  Moreover, unlike
StreamIt's split/join structures, which have special, language-defined
semantics such as round-robin or broadcast, split and join nodes in
this work can perform arbitrary computation and filtering just like
any other node.


\section{Background} \label{sec:background}

In this section, we describe some of the background for filtering
applications, deadlock avoidance, and SP-DAGs.

\subsection{Model of Streaming Applications with Filtering}

A streaming application in the model of Li et al.\ has a DAG topology
of computation nodes connected by reliable, one-way communication
channels, each of which has a finite channel buffer.  Input messages
arrive at a unique first node of the application, are labeled with
monotonically increasing sequence numbers, and all channels are
assumed to deliver messages in FIFO order. A node accepts an input
with sequence number $i$ when, for each of its input channels, the
head of the channel buffer contains a message with sequence number
$\geq i$.  All messages with sequence number $= i$ are consumed
together, and they may result in messages with sequence number $i$
being sent on any subset of the node's output channels.  If an input
to a node does not result in an output on a given channel, we say that
the node \textit{filters} the input with respect to that channel.
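The consumption rule above can be sketched concretely.  The following is a minimal illustration of the model (the names \texttt{Message}, \texttt{ready\_seq}, and \texttt{consume} are hypothetical, not part of any cited system): a node may fire on the smallest sequence number at the head of its input channels, since FIFO delivery and monotone sequence numbers guarantee a smaller number can never arrive later.

```python
from collections import deque, namedtuple

Message = namedtuple("Message", ["seq", "data"])

def ready_seq(in_channels):
    """Return the sequence number the node can accept next, or None.

    A node accepts input i when the head of every input channel
    buffer holds a message with sequence number >= i.  With FIFO
    channels and monotone sequence numbers, the minimum of the
    heads is the earliest such i.
    """
    heads = []
    for ch in in_channels:
        if not ch:              # an empty channel blocks the node
            return None
        heads.append(ch[0].seq)
    return min(heads)

def consume(in_channels, i):
    """Pop and return every head message with sequence number i.

    A channel whose head carries a larger sequence number had its
    message i filtered upstream, so nothing is popped from it.
    """
    return [ch.popleft() for ch in in_channels if ch and ch[0].seq == i]
```

Note that a channel whose head carries sequence number greater than $i$ still permits the node to fire: the head itself certifies that message $i$ was filtered on that channel.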

Li et al.\ observed that in the presence of finite buffers between
nodes, filtering behavior can lead to deadlock, as illustrated in
Figure~\ref{fig:example}.  If the buffer from $A$ to $C$ is empty
because $A$ filters its output to $C$ and the buffers from $A$ to $B$
and $B$ to $C$ are full, the application is deadlocked.  $A$ must wait
for $B$ to consume an input before it can proceed; $B$ must wait for
$C$ to consume an input; and $C$ must wait until it sees an input from
$A$.

\begin{figure}[bth]
\centering
\includegraphics[width=0.8\columnwidth]{figure1.pdf}
\caption{A deadlock condition in a streaming application.}
\label{fig:example}
\end{figure}

Theorem 2.1 of \cite{SPAA10} shows that a deadlock can arise in a DAG $G$
only through the creation of a \emph{blocking cycle}.  Any undirected
cycle $C$ of a DAG has at least one node with two outgoing edges and
one with two incoming edges.  More generally, any cycle of $G$ can be
decomposed into a sequence of nodes where alternating nodes have two
incoming and two outgoing directed paths on $C$.  Roughly, a deadlock
can occur whenever each of these nodes has a directed path with
completely full buffers on one side, and an oppositely directed path
with completely empty buffers (due to filtering) on the other side.

\subsection{Deadlock Avoidance Through Dummy Messages}

To avoid deadlock, Li et al.\ proposed two algorithms in which nodes
periodically send \textit{dummy messages} -- content-free messages
whose sequence number is that of some input that was filtered by the
node.  The idea of dummy messages originates in the parallel
discrete-event simulation (PDES) literature~\cite{Misra86}, which used
null messages for deadlock avoidance in conservative PDES algorithms.

In the first algorithm, the ``Propagation Algorithm'', only nodes with
two outgoing edges on some undirected cycle send dummy messages.  A
dummy is sent on a channel whenever its source has gone too long
without sending a data message on the channel.  Dummy messages may not
themselves be filtered but must be propagated on all output channels
of any node they reach.  In the second algorithm, the
``Non-propagation Algorithm'', \emph{every} node sends dummy messages,
but the dummies need not be propagated past the channel on which they
are emitted.  Either algorithm can be implemented as a ``wrapper''
around each computational node in an application by the language
compiler and runtime, with no participation by the application
programmer.

To implement dummy message-based deadlock avoidance, the language
compiler must use the finite lengths of each channel buffer to
calculate the intervals at which every node must send dummy messages to
ensure safety.  The basic idea behind calculating dummy intervals
for the Propagation Algorithm is the following. Consider an edge $e$
from node $u$ to node $v$.  Let $F$ be the set of edges starting at
$u$, and let $\mathcal{C}$ be the set of undirected simple cycles that
contain both $e$ and another edge from $F-\set{e}$.  For a cycle $C
\in \mathcal{C}$, let $L(C,e)$ be the length of a shortest directed
path on $C$ (edge lengths are the buffer sizes on the edges) starting
at $u$ and not containing $e$.  The dummy interval $[e]$ for $e$ is
then given by
\[
[e] = \min_{C \in \mathcal{C}} L(C,e).
\]

The corresponding idea for the Non-propagation Algorithm is the
following.  Consider an edge $e$, and let $\mathcal{C}'$ be the set of
undirected simple cycles containing $e$.  For each cycle $C \in
\mathcal{C}'$, let $P$ be a longest directed path (in terms of edge
count) on $C$ that contains $e$, and let $h(C,e)$ be the number of
edges of $P$.  Let $L(C,e)$ be the length of a shortest directed path
on $C$ that starts at the source of $P$ and is edge-disjoint from $P$.
The dummy interval for $e$ is then given by
\[
  [e] = \min_{C \in \mathcal{C}'} L(C,e) / h(C, e).
\]

\begin{figure}[bth]
\centering 
\includegraphics[width=0.8\columnwidth]{dummyintervals.pdf}
\caption{Calculating dummy intervals on an undirected cycle using the algorithms given by Li et al.}
\label{fig:dummyinterval}
\end{figure}


%  Consider an edge $e$
% from node $u$ to node $v$.  Let $F$ be the set of edges starting at
% $u$, and let $\mathcal{C}$ be the set of undirected simple cycles that
% contain both $e$ and another edge from $F-\set{e}$.  For a cycle $C
% \in \mathcal{C}$, let $\spath(C,e)$ be the longest directed path on $C$
% (the weights are the buffer sizes on the edges) starting at $u$ not
% containing $e$.  Let $\p(C,e)$ be the maximal directed path (in terms
% of the number of edges) on $C$ starting at $e$, and let $\card{\p(C,e)}$ be
% the number of hops on that path.  We want to compute 
% \begin{eqnarray*}
%   \dummy(e) = \min_{C \in \mathcal{C}} \spath(C,e)/\card{\p(C, e)}
% \end{eqnarray*}
% Then for each edge $e' \in p$, we set $[e'] = \min\{[e'], \dummy(e)\}$.

The above methods apply to general DAGs, but a direct implementation
of them to compute dummy intervals requires worst-case time
exponential in the size of the DAG (since a DAG may have exponentially
many undirected simple cycles). It is currently unknown whether
polynomial-time algorithms exist for dummy interval computation on
general DAGs.  
% \begin{algorithm}[hbt]
% \SetAlgoNoLine
% \SetAlgoNoEnd
% \KwIn{A system abstracted as graph $G=\{V,E\}$}
% \KwOut{Dummy intervals for each channel}
% \caption{Dummy interval calculation with dummy propagation~\cite{SPAA10}}
% \label{algo:dummyintervals}
% \lForEach{edge $uv \in E$}{$[uv] \gets \infty$} \;
% \ForEach{undirected cycle $C$ of $G$} {
%   \ForEach{node $u$ with two output channels $uv_1$, $uw_1$ on $C$} {
%     let $p_1 = u v_1 \ldots v_m$ be maximal directed path on \mbox{~~~~~}  $C$ starting with $u v_1$ \;
%     let $p_2 = u w_1 \ldots w_n$ be maximal directed path on \mbox{~~~~~}  $C$ starting with $u w_1$ \;
%     $[uv_1] \gets \min([uv_1],|p_2|)$ \;
%     $[uw_1] \gets \min([uw_1],|p_1|)$ \;
%   }
% }
% \end{algorithm}

% \begin{algorithm}[hbt]
% \SetAlgoNoLine
% \SetAlgoNoEnd
% \KwIn{A system abstracted as graph $G=\{V,E\}$}
% \KwOut{Dummy intervals for each channel}
% \caption{Dummy interval calculation without dummy propagation~\cite{SPAA10}}
% \label{algo:dummyintervals2}
% \lForEach{edge $uv \in E$}{$[uv] \gets \infty$} \;
% \ForEach{undirected cycle $C$ of $G$} {
%   \ForEach{node $u$ with two output channels $uv_1$, $uw_1$ on $C$} {
%     let $p_1 = u v_1 \ldots v_m$ be maximal directed path on \mbox{~~~~~} $C$ starting with $u v_1$ \;
%     let $p_2 = u w_1 \ldots w_n$ be maximal directed path on \mbox{~~~~~} $C$ starting with $u w_1$ \;
%     $[uv_1] \gets \min([uv_1],\ceil{|p_2|/m})$ \;
%     \For{$i$ in $2 \ldots m$} {
%       $[v_{i-1}v_i] \gets \min([v_{i-1}v_i],\ceil{|p_2|/m})$ \;
%     }
%     $[uw_1] \gets \min([uw_1],\ceil{|p_1|/n})$ \;
%     \For{$i$ in $2 \ldots n$} {
%       $[w_{i-1}w_i] \gets \min([w_{i-1}w_i],\ceil{|p_1|/m})$ \;
%     }
%   }
% }
% \end{algorithm}

\subsection{SP-DAGs}

\textit{Series-parallel} (SP) DAGs, which were defined by Valdes et
al.~\cite{Valdes79}, intuitively describe a large class of natural
streaming topologies that can be built up recursively via pipelining
and parallel splits and joins.  
\begin{definition}[\textbf{Series-parallel DAG}]

A series-parallel DAG (SP-DAG) is a connected, directed acyclic
multigraph with two distinguished terminals, a source and a sink.  The
set of all SP-DAGs is defined recursively as follows:
 
\textbf{Base}: a source and sink connected by any non-zero multiplicity of
edges is an SP-DAG.

\textbf{Ind.\ 1} (Serial composition, $\Sc$): if $H_1$ and $H_2$ are
SP-DAGs, connecting them by merging the sink of $H_1$ and the source
of $H_2$ yields an SP-DAG $\Sc(H_1,H_2)$.

\textbf{Ind.\ 2} (Parallel composition, $\Pc$): if $H_1$ and $H_2$ are
SP-DAGs, connecting them by merging the sources of $H_1$ and $H_2$,
and the sinks of $H_1$ and $H_2$, yields an SP-DAG $\Pc(H_1,H_2)$.
\end{definition}
We sometimes refer to subgraphs $H_1$
and $H_2$ in the composition operations as \emph{components} of the
composed graph.
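The recursive definition can be mirrored directly as a small term datatype.  The sketch below is a hypothetical Python encoding (all names illustrative); it builds the split/join topology of Figure~\ref{fig:splitjoin} as an SP-DAG term.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Edge:            # Base: source and sink joined by parallel edges
    buffers: list      # one buffer size per parallel edge (non-empty)

@dataclass
class Sc:              # Serial: sink of h1 merged with source of h2
    h1: "SP"
    h2: "SP"

@dataclass
class Pc:              # Parallel: sources merged, sinks merged
    h1: "SP"
    h2: "SP"

SP = Union[Edge, Sc, Pc]

def num_edges(g):
    """Count the concrete edges of the represented multigraph."""
    if isinstance(g, Edge):
        return len(g.buffers)
    return num_edges(g.h1) + num_edges(g.h2)

# Figure 1's split/join (A -> {B, C} -> D), buffer size 4 per channel:
splitjoin = Pc(Sc(Edge([4]), Edge([4])),   # A -> B -> D
               Sc(Edge([4]), Edge([4])))   # A -> C -> D
```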

\section{Empirical Motivation}\label{sec:empirical}

\input{empirical}



\section{Destination-Tagged Dummy Messages} \label{sec:destTagged}

The original Propagation Algorithm of Li et al.~\cite{SPAA10} can
incur unnecessary overhead by continuing to propagate dummy messages
after they are no longer needed.  The destination-tagged propagation
algorithm introduced in this section aims to reduce this overhead.

Recall that in the Propagation Algorithm, whenever any node receives a
dummy message, it propagates it along all its outgoing edges.
Therefore, if a node $u$ generates a dummy message on edge $(u, v)$,
it is received by all the successors of $v$ in the DAG, even if it is
no longer useful.  We present a new version of the Propagation
Algorithm, called the \emph{destination-tagged propagation algorithm},
for SP-DAGs and CS4 DAGs.  Just like the original Propagation
Algorithm, only the source nodes can generate dummy messages, but
messages are tagged with a destination node $d$.  When a node receives
a dummy message with destination $d$ does not necessarily forward it
along all its edges; it only forwards it along the edges that can
reach $d$.  When it reaches $d$, $d$ does not propagate it any
further.  Therefore, our algorithms have the property that if a source
node $u$ generates a a dummy message with destination $d$ on edge
$(u,v)$, the dummy message only propagates along paths from $v$ to
$d$, and not to all the successors of $b$.  Therefore, this algorithm
can potentially reduce the communication overheads.

Since each source can generate dummy messages for multiple sinks, each
edge can have more than one dummy interval associated with it.
Formally, we represent the \textit{dummy message schedule} of an edge
$e$ as a set $[e] = \set{p_1, p_2, ..., p_k}$, where each $p_i =
(\tau_i, d_i)$ is a \emph{dummy interval-destination pair}.  $\tau_i$
represents an interval at which a dummy message must be sent, while
$d_i$ represents its destination sink.  In addition, each pair $p_i$
has a counter $c_i$ associated with it, whose maximum value is
$\tau_i$.  A source node uses the dummy
message schedule and the counters to decide when to send dummy
messages along $e$.  In Sections~\ref{sec:sp-dags} and
\ref{sec:sp-ladder-dummy}, we show how to efficiently compute the
dummy message schedules for SP-DAGs and CS4 DAGs respectively, and
also how the nodes behave in order to correctly propagate the dummy
messages.
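One plausible concrete reading of this schedule-and-counter mechanism is sketched below (illustrative Python; the class and method names are hypothetical, and the precise runtime behavior is specified later).  The counter for each pair advances with the source's activity and triggers a dummy message to that destination when it reaches the pair's interval.

```python
from dataclasses import dataclass, field

@dataclass
class DummyPair:
    tau: int       # interval: send a dummy at least every tau steps
    dest: str      # destination sink tag
    count: int = 0 # counter c_i, bounded above by tau

@dataclass
class EdgeSchedule:
    pairs: list = field(default_factory=list)   # kept sorted by tau

    def add(self, tau, dest):
        # keep at most one pair per destination: retain smallest tau
        for p in self.pairs:
            if p.dest == dest:
                p.tau = min(p.tau, tau)
                break
        else:
            self.pairs.append(DummyPair(tau, dest))
        self.pairs.sort(key=lambda p: p.tau)

    def tick(self):
        """Advance every counter by one step; return the destinations
        now due for a dummy message, resetting their counters."""
        due = []
        for p in self.pairs:
            p.count += 1
            if p.count >= p.tau:
                due.append(p.dest)
                p.count = 0
        return due
```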

\section{Efficient Deadlock Avoidance for SP-DAGs}\label{sec:sp-dags}

We now present two algorithms for efficient deadlock avoidance for
SP-DAGs, a destination-tagged propagation algorithm, and a
non-propagation algorithm.  In this section, we first briefly state
the properties of SP-DAGs that allow us to efficiently calculate dummy
schedules for these topologies.  We then describe how to compute the
dummy schedules for both the destination-tagged propagation algorithm and
the Non-propagation algorithm in polynomial time.  In addition, we
also describe each node's runtime behavior while implementing the
destination-tagged propagation algorithm, and prove that the
destination-tagged algorithm guarantees deadlock freedom for SP-DAGs.
The runtime behavior of nodes under the Non-propagation algorithm is
the same as that described by Li et al., and its correctness follows
from their proof.

\subsection{SP-DAG preliminaries}

The next few lemmas elucidate the undirected cycle structure of
SP-DAGs, which we will exploit later to define efficient deadlock
avoidance algorithms.  In particular, we use the property that every
undirected cycle on an SP-DAG has a single source and a single sink.
We also use the hierarchical decomposition structure of SP-DAGs to
efficiently compute dummy message schedules.

\begin{observation}
In an SP-DAG, every node other than the sink has an immediate
postdominator (this follows directly from the single-sink property).
\end{observation}

\begin{lemma}
\label{lemma:sp1.1}
In an SP-DAG $G$, let $Z$ be a node with at least two outgoing edges.
Let $W$ be the immediate postdominator of $Z$. Then for any directed
path $P$ from $Z$ to $W$, $Z$ dominates all nodes of $P$ other than
$W$.
\end{lemma}
% \begin{proof}
% By induction on the structure of $G$.

% \textbf{Base}: in an SP-DAG with a single multi-edge, $P$ is a single
% edge from $Z$ to $W$.  $Z$ trivially dominates itself.

% \textbf{Ind.}: Otherwise, $G$ is either $\Sc(H_1,H_2)$ or $\Pc(H_1,H_2)$
% for SP-DAGs $H_1$, $H_2$.  If $Z$ is the source of $G$, then $Z$
% trivially dominates all of $G$, since SP-DAGs have a single source.
% $Z$ can not be the sink of $G$ since the sink has no outgoing edges.

% Now $Z$ lies either in $H_1 - H_2$ or in $H_2 - H_1$, or $G = \Sc(H_1,
% H_2)$ and $Z$ is the sink of $H_1$ and the source of $H_2$.  If $Z$ is
% in $H_1 - H_2$, then $H_1$'s sink always postdominates $Z$, so $W$,
% the immediate postdominator of $Z$, is a node in $H_1$.  Applying the
% IH to subgraph $H_1$, the Lemma holds for $Z$ and $W$.  Analogous
% reasoning holds if $Z$ is in $H_2 - H_1$.  Finally, if $Z$ is the
% source of $H_2$ and the sink of $H_1$, then $W$ is in $H_2$ and $Z$
% dominates all of $H_2$.
% \end{proof}


\begin{lemma}
\label{lemma:sp1.2}
Let $G = \Pc(H_1,H_2)$ be an SP-DAG, where $X$ is its source and $Y$
is its sink. Let $Z$ be a node of $H_1 - \set{X,Y}$ that has at least two
outgoing edges $e$ and $e'$ in $G$. Let $C$ be an undirected simple
cycle that contains both $e$ and $e'$.  Then $C$ contains no edge
$e'' \in H_2$.
\end{lemma}

\begin{lemma}
\label{lem:sp1.3}
  For an SP-DAG $G$ = $\Pc(H_1,H_2)$, any undirected simple cycle $C$
  in $G$ that has edges in both $H_1$ and $H_2$ consists of a pair
  of directed paths $P_1$ through $H_1$ and $P_2$ through $H_2$ that
  connect the source $X$ of $G$ to its sink $Y$.
\end{lemma}

\begin{lemma}
\label{lemma:spdag-cs4}
  Each undirected simple cycle in an SP-DAG $G$ has a single source and a
  single sink.
\end{lemma}
% \begin{proof}
% By induction on the structure of $G$.

% \textbf{Base:} Trivially true for a single multi-edge.

% \textbf{Ind.:} If $G = \Sc(H_1,H_2)$, then the property holds for
% $H_1$ and $H_2$, and their serial composition creates no new cycles.
% Hence the property holds for every cycle of $G$.

% If $G = \Pc(H_1,H_2)$, then every new cycle created by their parallel
% composition connects the common source $X$ of $G$ to its common sink
% $Y$ by directed paths passing through $H_1$ and $H_2$, respectively.
% All such cycles therefore have one source $X$ and one sink $Y$.
% \end{proof}

\begin{lemma} \label{lem:spdag-postdom}
If a node $X$ of an SP-DAG $G$ is the source of two components with
sinks $Y$ and $Z$, and these components share a common edge, then
either $Y$ is a successor of $Z$ in $G$ or vice versa.
\end{lemma}



\subsection{Destination-tagged Propagation Algorithm}

We now present the destination-tagged propagation algorithm for
SP-DAGs.  We will describe both the compile time algorithm used to
compute dummy schedules for each edge, and the runtime behavior of
nodes.  The calculation of dummy schedules at compile time requires
$O(\card{G}^2)$ time.

In our approach, the source node of each component $H$ of an SP-DAG is
responsible for preventing deadlock on undirected cycles of $H$ that
cross more than one of its sub-components.  Since a node can be a
source for multiple distinct components, it may need to send dummy
messages that target multiple sinks.  Therefore, an edge $e$ from
source $u$ has a dummy message schedule $[e] = \set{p_1, p_2, ...,
  p_k}$, where in each pair $p_i = (\tau_i, d_i)$, $d_i$ is a sink of
some component for which $u$ is the source.  $\tau_i$ is the interval
at which a dummy message must be sent to sink $d_i$.  We keep this
list of pairs sorted by $\tau_i$, and each edge's schedule contains
at most one pair for any particular destination.

\subsubsection*{Computing Dummy Message Schedules}
\label{sec:dest-tagged-spdags}

At compile time, we compute the dummy message schedule for each edge
using a recursive decomposition of the SP-DAG as follows:

\begin{enumerate}

\item We first recursively decompose $G$ according to the construction
  rules for SP-DAGs, using e.g.\ the linear-time recognition algorithm
  of Valdes, Tarjan, and Lawler~\cite{Valdes79}. The decomposition
  results in a tree $T$ whose leaves are single (multi-)edge graphs
  and whose internal nodes are labeled with the composition operators
  $\Sc$ or $\Pc$, such that applying the composition operations in
  post-order results in graph $G$. The size of this tree is $O(|G|)$.

\item For every component $H$ of $G$, we compute $L(H)$, which is the
  length of a shortest directed path (with buffer lengths as edge
  weights) from the source of $H$ to its sink. This calculation can be
  done bottom-up on the tree $T$ in $O(|G|)$ time.

\item We then compute schedules for all edges in total time
     $O(\card{G}^2)$ as follows.

\end{enumerate}
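Step 2's bottom-up computation of $L(H)$ follows the composition rules directly: for serial composition the source-to-sink lengths add, and for parallel composition the smaller one wins.  A minimal sketch over a tuple-encoded decomposition tree (the encoding is illustrative):

```python
# Decomposition-tree nodes: ("edge", [buffer sizes]) for a multi-edge
# leaf, or ("Sc", t1, t2) / ("Pc", t1, t2) for compositions.
def L(t):
    """Shortest source-to-sink length, buffer sizes as edge weights.

    One O(1) step per tree node gives O(|G|) total time.
    """
    if t[0] == "edge":
        return min(t[1])              # cheapest parallel edge
    _, t1, t2 = t
    if t[0] == "Sc":
        return L(t1) + L(t2)          # paths concatenate in series
    return min(L(t1), L(t2))          # parallel: take the cheaper side
```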

\begin{comment}
-----CAN CUT THIS IF SHORT OF SPACE ------
For step 2 of the above procedure, the following algorithm allows us
to use the component tree $T$ to compute shortest paths from source to
sink for each component $H$ of $G$ in $O(|G|)$ time.  For a single
multi-edge $X\to Y$, we can compute a shortest path from $X$ to $Y$ in
time proportional to the number of edges.  Given shortest path lengths
$L(H_1)$ and $L(H_2)$ from source to sink in SP-DAGs $H_1$, $H_2$, we
can compute this length for their composition $H$ in constant time as
follows:
\begin{itemize}
 \item If $H = \Sc(H_1,H_2)$, $L(H) = L(H_1) + L(H_2)$.
 \item If $H = \Pc(H_1,H_2)$, $L(H) = \min( L(H_1), L(H_2) )$.
\end{itemize}
----------------------------------------------
\end{comment}

The schedule computation algorithm performs a post-order traversal of
$G$'s component decomposition tree $T$.  For each component $H$ of
$G$, we have three possibilities.

\textbf{Case 1:} Say $H$ is a leaf of $T$ corresponding to a
multi-edge $X\to Y$.  For each edge $e$ of this multi-edge, let $\tau$
be the minimum buffer size over all edges other than $e$ between $X$
and $Y$, and set $[e] = \{ (\tau, Y) \}$.  If $X\to Y$ consists of
only a single edge, then $[e] = \emptyset$.

\textbf{Case 2:} Say $H = \Sc(H_1,H_2)$.  Since $H_1$ and $H_2$ are
joined by a single articulation point, their composition creates no
new simple cycles.  The schedules for edges in $H_1$ and $H_2$ do not
change.

\textbf{Case 3:} Say $H = \Pc(H_1,H_2)$, where $X$ is $H$'s source and
$Y$ is $H$'s sink. Now we add new pairs for each edge $e$
out of $X$ in $H_1$ as follows:
\[
[e] \leftarrow [e] \cup \set{ (L(H_2), Y) }.
\]
Similarly, for each edge $e'$ out of $X$ in $H_2$, we set a new interval
\[
[e'] \leftarrow [e'] \cup \set{ (L(H_1), Y) }.
\]

Finally, to eliminate unneeded dummy messages, we postprocess the
schedule of each edge $e$ as follows.
\begin{itemize}

\item If $[e]$ has more than one pair with the same destination, we
  retain only the pair with the smallest interval $\tau_i$.

\item If $[e]$ contains two pairs $p_a = (\tau_a, d_a)$ and $p_b =
  (\tau_b, d_b)$, such that $d_b$ succeeds $d_a$ and $\tau_b \leq
  \tau_a$, then we remove $p_a$.

%\item Suppose $e$ proceeds out of node $X$, and let edge $e'$
%  precede it.  If $[e']$ contains a pair $(\tau_b, d_b)$, then
%  we can remove all of $[e]$'s pairs $(\tau_a, d_a)$ where $\tau_a \geq
%  \tau_b$ and $d_b$ postdominates $X$.  

\end{itemize}
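The two pruning rules can be sketched as follows; the successor predicate \texttt{succ} is an assumed helper (e.g.\ a precomputed reachability table over $G$), not part of the paper's algorithm:

```python
def postprocess(pairs, succ):
    """Prune an edge's schedule [e].  `pairs` is an iterable of
    (tau, dest) pairs; succ(a, b) is assumed to return True iff
    node b is a successor of node a in the DAG."""
    # Rule 1: keep only the smallest interval per destination.
    best = {}
    for tau, d in pairs:
        if d not in best or tau < best[d]:
            best[d] = tau
    kept = [(tau, d) for d, tau in best.items()]
    # Rule 2: drop (tau_a, d_a) when some (tau_b, d_b) has d_b a
    # successor of d_a and tau_b <= tau_a.
    pruned = [(ta, da) for ta, da in kept
              if not any(succ(da, db) and tb <= ta
                         for tb, db in kept if db != da)]
    return sorted(pruned)        # schedule sorted by increasing tau
```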
This postprocessing requires only $O(|G|)$ time per edge.  We now prove that this calculation preserves the invariants we require.

\begin{lemma}
In any edge's dummy schedule $[e]$, there is at most one dummy interval per destination, and the dummy messages are sorted by increasing $\tau$.
\end{lemma}
\begin{proof}
  The first step of postprocessing ensures that there is at most one
  dummy message per destination on an edge.  In addition, since the
  dummy intervals are calculated in post-order, if pair $p_i =
  (\tau_i, d_i)$ comes before pair $p_j = (\tau_j, d_j)$ in the
  original calculation, then $d_j$ is a successor of $d_i$.  Therefore, after step 2 of postprocessing, the schedule is sorted by increasing $\tau_i$. 
\end{proof}



\subsubsection*{Runtime Node Behavior}
\label{sec:propbehavior}

We now describe how the schedules of each edge are used at runtime to
decide when to send dummy messages.  We assume that the pairs of each
edge's schedule $[e]$ are ordered by increasing $\tau$.  To track the
time between successive dummy messages to each destination, edge $e$
maintains a counter $c_i$ for each pair $p_i$.  The value of
counter $c_i$ ranges from 0 to $\tau_i$.

Each time node $X$ processes an incoming message, it acts as follows: 
\begin{itemize}

\item If the message is a dummy (or a real message that is also marked
  as dummy), and $X$ is not its destination, then $X$ schedules a
  dummy message on all its outgoing edges and zeros out all counters
  on these edges.

\item If the message is not a dummy, or is a dummy message with
  destination $X$, then $X$ increments all counters on all outgoing
  edges, starting with the largest $\tau_i$ (end of the list).  If a
  counter $c_i$ on edge $e$ reaches its maximum value, then $X$
  schedules a dummy message with destination $d_i$ along $e$ and
  zeroes out all counters $c_j$ on $e$ with $j \leq i$.
\end{itemize}
In all cases, if $X$ has scheduled a dummy message on an edge $e$, and
is also sending a real message on edge $e$, then it merges the dummy
message with the real message and sends them as a single message.
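A minimal sketch of this runtime rule follows, under one reading of the counter update: when a counter $c_i$ fires, the counters $c_j$ with $j \leq i$ have just been zeroed, so the downward sweep stops for that edge.  The \texttt{edges} representation (lists of \texttt{[tau, dest, counter]} entries) is illustrative only:

```python
def on_message(node_id, is_dummy, dest, edges):
    """One step of the runtime rule at node X (= node_id).  `edges`
    maps each outgoing edge to a list of [tau, d, c] entries sorted
    by increasing tau, where c is the counter for pair (tau, d).
    Returns the (edge, destination) dummies scheduled."""
    scheduled = []
    if is_dummy and dest != node_id:
        # A dummy passing through: forward it on every outgoing edge
        # and zero all counters on those edges.
        for e, entries in edges.items():
            scheduled.append((e, dest))
            for entry in entries:
                entry[2] = 0
        return scheduled
    # Otherwise: increment counters, starting with the largest tau.
    for e, entries in edges.items():
        for i in range(len(entries) - 1, -1, -1):
            entries[i][2] += 1
            if entries[i][2] == entries[i][0]:   # counter c_i hit tau_i
                scheduled.append((e, entries[i][1]))
                for j in range(i + 1):           # zero c_j for all j <= i
                    entries[j][2] = 0
                break  # smaller counters were just reset (our reading)
    return scheduled
```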

\subsubsection*{Proof of Freedom from Deadlock}

In order to prove that our destination-tagged dummy message scheme
prevents deadlock, we use the general strategy
of~\cite{SPAA10}. Theorem 2.1 of that work shows that deadlock can
arise in a DAG $G$ only through the creation of a blocking cycle.
Since SP-DAGs have exactly one source and one sink on each cycle, a
blocking cycle consists of one path from the source to the sink with
full buffers and another path from the source to the sink with empty
buffers.  We now show that, because of the design of our dummy message
scheme above, no sequence of messages sent on $G$ can ever give rise
to a blocking cycle, no matter how nodes choose to filter the
non-dummy messages.

\begin{lemma} \label{lem:forward}
Let $H$ be a component of $G$ with source $X$ and sink $Y$.
If $X$ propagates an incoming dummy message, then that message
will reach $Y$.
\end{lemma}

\begin{proof}
A dummy message arriving at $X$ was generated by the source of some
super-component $H'$ of $H$ with sink $Z$. By the properties of
SP-DAGs, $Z$ must be either $Y$ or a successor of $Y$.  In either
case, all paths from $X$ to $Z$ lead through $Y$, so $Y$ will
eventually receive the dummy message.
\end{proof}

\begin{lemma} \label{lem:spdag-ordering}
If an edge's schedule includes pairs $p_i = (\tau_i, d_i)$ and $p_j =
(\tau_j, d_j)$, and $\tau_i < \tau_j$, then $d_j$ is a successor
of $d_i$.
\end{lemma}

\begin{proof}
  Step 1 of postprocessing ensures that $d_i \neq d_j$.  By
  Lemma~\ref{lem:spdag-postdom}, one of these nodes is a successor of
  the other.  If $d_i$ were a successor of $d_j$, then step 2 of
  postprocessing would have removed $p_j$.
\end{proof}

\begin{lemma} \label{lem:maxint}
Suppose that, for edge $e$ out of node $X$, pair $(\tau_i, d_i) \in
[e]$.  For every $\tau_i$ consecutive messages that $X$ receives, it sends at least
one dummy message along $e$ that will reach $d_i$.
\end{lemma}

\begin{proof}
Consider a span of $\tau_i$ consecutive messages received by $X$.
Before these messages arrive, counter $c_i$ on $e$ has some value $<
\tau_i$.  One of two cases will occur:
\begin{enumerate}
\item If one of the messages is a dummy that does not target $X$,
  then by Lemma~\ref{lem:forward}, the dummy will reach $d_i$.

\item If all the messages either are non-dummies or target $X$, then
  either counter $c_i$ will increase until it reaches $\tau_i$,
  triggering a dummy message to $d_i$, or some other counter $c_j$, $j
  > i$, will reach $\tau_j$, triggering a dummy message to $d_j$.  By
  Lemma~\ref{lem:spdag-ordering}, we know that $d_j$ is a
  successor of $d_i$, and so this message will pass through $d_i$.
\end{enumerate}
\end{proof}

%\begin{lemma}
%If an edge $e_1$ has a dummy message, then all edges that succeed this
%edge eventually forward this dummy message until it reaches its
%destination.
%\end{lemma}
%\begin{proof}
%From model.
%\end{proof}

%The next lemma shows why it is ok to remove the pairs from step 3.  

%\begin{lemma} \label{lem:thirdstep}
%  Consider an edge $e$ starting at $X$, and say edge $e'$ starting at
%  $Z$ precedes it.  Say $e$ has a pair $p_j = (\tau_j, d_j)$ and $e'$
%  has a pair $p_i = (\tau_i, d_i)$ such that $d_i$ postdominates $X$
%  and $\tau_j \geq \tau_i$.  Then the counter for the pair $p_j$ will
%  never reach its maximum value.
%\end{lemma}

%\begin{proof}
%Due to the properties of SP-DAGs, $Z$ is a predecessor of $X$, and $X$
%can get a message with a particular index only if $Z$ only got it.  In
%addition, since $d_i$ postdominates $X$, every time $Z$ sends a dummy
%message for pair $p_i$, $X$ gets it, and when $X$ gets it, it zeroes
%out all of its dummy message counters.  Since the maximum interval
%between two dummy messages from $Z$ is $\tau_i$, the counter for $p_j$
%will never reach $\tau_j$.
%\end{proof}

\begin{lemma} \label{lem:postproc}
Consider a parallel component $H=\Pc(H_1,H_2)$ with source $X$ and
sink $Y$.  Let $L(H_1)$ be the length of a shortest path from $X$ to
$Y$ through $H_1$.  Consider any edge $e \in H_2$ that starts at $X$.
In any time period during which $X$ receives $L(H_1)$ messages, it
sends (or forwards) at least one dummy message on $e$ with destination
either $Y$ or a successor of $Y$.
\end{lemma}

\begin{proof}
  When the schedule-setting algorithm first processes $H$, it adds
  the pair $(L(H_1), Y)$ to $[e]$.  Postprocessing will remove
  this pair only if $X$ is also scheduled to send a more frequent 
  dummy message to $Y$ or to one of its successors. Hence,
  Lemma~\ref{lem:maxint} guarantees that $X$ will send at least
  one dummy message along $e$ that reaches $Y$ for each $L(H_1)$
  messages it receives.
\end{proof}

\begin{theorem} \label{thm:correct}
If dummy messages are sent as described in Section~\ref{sec:propbehavior},
using the interval-destination pairs computed as described in
Section~\ref{sec:dest-tagged-spdags}, then deadlock cannot occur in $G$.
\end{theorem}

\begin{proof}
  Suppose a deadlock does occur in $G$. Then there must be a blocking
  cycle $C$ in $G$.  Since $G$ is an SP-DAG, $C$ lies in some smallest
  parallel component $H$ and consists of two directed paths $s_1$ and
  $s_2$ joining $H$'s source $X$ to its sink $Y$.

  Suppose WLOG that $s_1$ is full and $s_2$ is empty.  We can
  decompose $H$ into parallel sub-components $H_1$ and $H_2$ such that
  $s_1 \subseteq H_1$ and $s_2 \subseteq H_2$.  By construction, the
  total length of all edges' buffers along path $s_1$ is $\geq L(H_1)$,
  while that along $s_2$ is $\geq L(H_2)$.

  Now consider the first edge $e$ on path $s_2$, which leaves source
  $X$.  This edge lies in component $H_2$.  For $s_1$ to fill, $X$
  must have received and passed on at least $L(H_1)$ messages. But
  then Lemma~\ref{lem:postproc} guarantees that $X$ has sent a dummy
  message along $e$ within its last $L(H_1)$ received messages. This
  dummy will eventually propagate to $Y$, where it will allow $Y$ to
  consume at least one of the buffered messages from $s_1$.  Since
  $s_1$ remains full, we conclude that the dummy must still be
  somewhere on path $s_2$, and so $s_2$ cannot be empty.  This
  contradicts our assumption that cycle $C$ is blocking.
\end{proof}



\subsection{Non-propagation Algorithm}
\label{sec:nonprop-spdags}

We now show how to efficiently calculate dummy intervals for the
Non-propagation Algorithm of~\cite{SPAA10} when the graph topology is
restricted to be an SP-DAG. The approach is broadly similar to that
for the Propagation Algorithm, except that the schedule $[e]$ for an
edge $e$ now consists of only a single pair whose destination is the
node at the end of the edge.  For this section, we therefore adopt the
convention that $[e]$ is a single number, the dummy interval for $e$.
In addition, all nodes (and not only sources) may generate dummy
messages on their outgoing edges.

\subsubsection*{Dummy interval calculation}

In \cite{SPAA10}, the chosen interval $[e]$ minimizes
a ratio between the length of a component-dependent shortest path and
the number of hops in an edge-dependent longest path.  We will compute exactly the same quantity in this paper.  

Our algorithm for dummy interval computation is as follows.
\begin{enumerate}
\item Decompose the graph into a tree of components.
\item Compute $L(H)$ for each component $H$, where $L(H)$ is the
  shortest path from $H$'s source to $H$'s sink, with buffer lengths
  as edge weights.
\item Compute $h(H)$ for each component $H$, where $h(H)$ is the
  longest path (in terms of the number of hops) from the source of $H$
  to its sink.
\begin{itemize}
\item For a single multi-edge, $h(H) = 1$.
\item If $H = \Sc(H_1, H_2)$, $h(H) = h(H_1) + h(H_2)$.
\item If $H = \Pc(H_1, H_2)$, $h(H) = \max( h(H_1), h(H_2) )$.
\end{itemize}
\item Compute $h(H,e)$ for each edge $e\in H$, where $h(H,e)$ is the
  longest path (in terms of the number of hops) from the source of $H$
  to its sink that passes through $e$.  For a single multi-edge,
  $h(H,e) = 1$.  For a series composition, for all $e \in H_1$,
  $h(H,e) = h(H_1, e) + h(H_2)$; similarly, for $e \in H_2$, $h(H,e) =
  h(H_2, e) + h(H_1)$.  For a parallel composition, if $e \in H_1$,
  $h(H,e) = h(H_1,e)$; similarly, for $e \in H_2$, $h(H,e) =
  h(H_2,e)$.  All these computations can be done in $O(\card{G}^2)$
  time.
\item Compute the dummy interval $[e]$ for each edge $e$ in a
  bottom-up fashion.
\end{enumerate}

The first four steps in the above procedure are straightforward.  For
the fifth step, we visit the components of $T$ in post-order.  When
considering component $H$, we update $[e]$ for all the edges in $H$
considering only cycles internal to $H$.

\textbf{Case 1:} If $H$ is a multi-edge from $X \to Y$, let $e$ be an
edge from $X$ to $Y$.  If we consider only cycles internal to $H$,
$L(H,e)$ is the minimum buffer size over all edges other than $e$
between $X$ and $Y$, and $h(H,e) = 1$.  Therefore, the calculation in
this case is identical to the calculation for the Propagation
Algorithm.

\textbf{Case 2:} If $H = \Sc(H_1,H_2)$, serial composition introduces
no new simple cycles through $e$, so $[e]$ is unchanged.

\textbf{Case 3:} If $H = \Pc(H_1,H_2)$, suppose WLOG that $e$ is in
$H_1$.  Let $X$ be the source of $H$, and let $Y$ be its sink.  Every
new cycle created by the parallel composition consists of two
confluent paths from $X$ to $Y$, one in each of $H_1$ and $H_2$. Let
$C$ be the newly created cycle that traverses a longest (in hop count)
directed path in $H_1$ that includes $e$ and returns via a shortest
(in buffer length) path in $H_2$.  Then the ratio $L(C,e) / h(C,e)$
for $C$ is minimum among all new cycles created by the composition.
Since $L(C,e) = L(H_2)$ and $h(C,e) = h(H_1,e)$, we have $[e] =
\min([e], L(H_2) / h(H_1,e))$.  The symmetric computation applies if
$e$ is in $H_2$.

Each case above takes constant time per edge in the component $H$, or
$O(\card{G})$ time per component. Conclude that the entire tree
traversal takes $O(\card{G}^2)$ time.

\subsubsection*{Runtime node behavior and correctness}

The behavior of nodes is exactly as described in~\cite{SPAA10}.  Briefly, a node sends a dummy message along an edge $e$ if it filters $[e]$ consecutive messages on edge $e$.  Correctness follows from the correctness proof in~\cite{SPAA10}.


\begin{comment}

\begin{theorem}
For any SP-DAG $G$ s.t.\ no multi-edge of $G$ has more than $O(1)$
edges, the maximum safe dummy intervals for the non-propagation
deadlock avoidance algorithm can be computed for all edges of $G$ in
time $O(\card{G}^2)$.
\end{theorem}
\begin{proof}
The quantity we want to compute for each edge $e$ of $G$ is defined as
follows. For a given cycle $C$ that passes through $e$, let $X$ and
$Y$ be the unique source and sink on $C$. Then $e$ lies on some
directed path $P$ from $X$ to $Y$ in $C$.  Let $h(C,e)$ be the total
number of hops (i.e.\ edges) in $P$, and let $L(C,e)$ be the total
buffer length of the directed path $C - P$.

We want to compute ${e} = min_{C \in \mathcal{C}} L(C,e) / h(C,e)$, where
$\mathcal{C}$ is the set of simple cycles $C$ passing through $e$.

Just as in the propagation algorithm, we can
compute $L(H_1)$ and $L(H_2)$ for a parallel composition in time
$O(\card{G})$.  Let $h(G)$ be the number of hops on a longest (in hops)
path from source to sink in $G$. We can compute $h(H)$ for every
component $H$ of $G$ in time $O(\card{G})$ using the 

\begin{algorithm}[htb]
	\label{algo:nonprop-sp}
	\SetAlgoNoLine
	\SetAlgoNoEnd
	\DontPrintSemicolon
	\caption{Set dummy interval on SP-DAGs for the Non-Prop. Algo.}

	\emph{/* $[e]$: dummy interval for $e$; $|e|$: length of $e$ */} 

	\If{$G$ is a single multi-edge $X \to Y$}{
		\ForEach{edge $e$ in the multi-edge}{
      		    $h(e) \gets 1$ \;
  			$[e]    \gets min_{e' \in X \to Y - e} |e'|$ \;

		}
		$L(G)	\gets min_{e' \in X \to Y} |e'|$ \;
		$H(G)	\gets 1 $ \;
	}
	\ElseIf{$G = \Pc(H_1, H_2)$}{ 
		\emph{/* parallel composition */}

     	\ForEach {edge $e$ in $H$}{
        	\If{$e$ is in $H_1$}{
				$[e]   \gets min([e], L(H_2) / h(H_1,e) )$ \;
			}
        	\Else{
        		$[e]   \gets min([e], L(H_1) / h(H_2,e) )$ \;
			}
     		\emph{/* $h(e)$ is unchanged for every edge */} 
		}
		$L(G)	\gets \min(L(H_1),L(H_2)) $ \;
		$h(G)	\gets \max(h(H_1),h(H_2)) $ \;
	}
	\Else{ 
		\emph{/* series composition; $G = \Sc(H_1, H_2)$ */} \

     	\ForEach {edge $e$ in H}{
        	\If{$e$ is in $H_1$}{
        	   $h(H,e) \gets h(H_1,e) + h(H_2)$\;
			}
        	\Else{
        	   $h(H,e) \gets h(H_2,e) + h(H_1)$\;
			}
     		\emph{/* $[e]$ is unchanged for every edge */} \
		}
		$L(G)	\gets L(H_1) + L(H_2) $ \;
		$h(G)	\gets h(H_1) + h(H_2) $ \;
	}
\end{algorithm}

\end{comment}



\section{CS4 DAGs: a Larger Set of Simple Streaming Topologies}
\label{sec:cs4}

We have shown how to efficiently prevent deadlock in SP-DAGs -- a
large, practically useful class of DAG topologies that can be
constructed with simple composition operations.  A natural question at
this point is: do there exist ``natural'' topologies that are not
SP-DAGs?  And might such topologies also admit efficient algorithms
for deadlock avoidance?


Figure~\ref{fig:nonSP} shows two simple two-terminal DAGs that are not
SP-DAGs.  The topology on the left augments a trivial split/join with
a one-way communication channel linking its two sides; it is perhaps
the simplest DAG that is not series-parallel. The topology on the
right adds slightly more complexity, creating a ``butterfly''
structure like that commonly used to decompose large FFT computations.
A key feature distinguishing the two graphs is that, in the left-hand
example, every undirected simple cycle has only one source and one
sink.  This property is true for SP-DAGs, and we exploited it
implicitly in the algorithms of the previous section.  On the other
hand, the butterfly graph contains a cycle \textit{a-c-b-d} with two
sources and two sinks.

\begin{figure}[htb]
\centering
\includegraphics[scale=0.3]{simple_ladder_n_butterfly}
\caption{two simple non-SP DAGs.}
\label{fig:nonSP}
\end{figure}

In this section, we characterize the set of all DAGs whose undirected
cycles each contain one source and one sink.  The next section shows
that all such DAGs are amenable to efficient deadlock avoidance using
generalizations of our algorithms from
Sections~\ref{sec:dest-tagged-spdags} and \ref{sec:nonprop-spdags}.
%butterfly that have cycles with multiple sources and sinks.  For such
%graphs, the programmer (or possibly the compiler) would need to
%construct an alternative topology to obtain efficient deadlock
%avoidance using our methods.

\begin{definition}
Let $G$ be a DAG with a single source and sink.  We say that $G$ is
``CS4'' if every undirected simple \textbf{c}ycle in $G$ has a 
\textbf{s}ingle \textbf{s}ource and a \textbf{s}ingle \textbf{s}ink.
\end{definition}

A streaming application with the butterfly topology of
Figure~\ref{fig:nonSP}B is neither an SP-DAG nor even a CS4 DAG.
However, it can be transformed to topologies with these properties by
removing and redirecting certain graph edges.  To transform this
topology to a CS4 DAG without adding or removing nodes, we remove edge
$ad$ and add a directed edge from $c$ to $d$.  All messages passed
from $a$ to $d$ directly in the original topology would then be routed
via node $c$. However, if we are limited to using only SP-DAGs,
besides removing $ad$ and adding $cd$, we would also need to remove
edge $bd$ and route messages from $b$ to $d$ via node $c$, as
Figure~\ref{fig:butterfly_cs4} shows.
Hence, we can realize the original topology as a CS4 DAG with fewer
changes than are needed to realize it as an SP-DAG.

A practical consequence of the difference between the CS4 and SP-DAG
realizations of Figure~\ref{fig:nonSP}B is that the CS4 DAG requires
removing fewer edges, and hence less forwarding of messages that were
delivered directly in the original topology.  Moreover, the total
number of messages sent is greater for the SP-DAG than for the CS4
DAG.  As our experiments illustrate, reducing the total number of
messages sent by a given node can significantly improve its
real-world performance.

\begin{figure}[htb]
\centering
\includegraphics[scale=0.3]{butterfly2spladder}
\caption{transforming a butterfly to CS4 DAG and SP DAG}
\label{fig:butterfly_cs4}
\end{figure}

We can formally characterize CS4 graphs by the absence of a forbidden
graph minor as follows.
\begin{lemma}
\label{lem:cs4K4}
$G$ is CS4 only if no subgraph of $G$ is homeomorphic to $K_4$,
the complete graph on 4 vertices.
\end{lemma}
Now the absence of a subgraph homeomorphic to $K_4$ is a
characteristic property of \emph{undirected} series-parallel
graphs~\cite{Duf65}.  Hence, we may expect that CS4
DAGs have an undirected series-parallel structure.  However, this does
not imply that a CS4 DAG is an SP-DAG; our simple four-node graph
above provides a counterexample.  Fortunately, as we now show, it
turns out that just a small amount of extra complexity is needed to
capture all CS4 DAGs.

\begin{definition}
A \textit{2-path cycle} is a DAG consisting of a single source $X$, a single
sink $Y$, and two directed paths connecting $X$ to $Y$ that are
disjoint except at their endpoints.
\end{definition}
\begin{definition}
  Let $C$ be a cycle.  A \textit{chord graph} $H$ is a DAG with a single source
  and sink that connects two vertices of $C$, such that $H$'s source and
  sink lie on $C$.
\end{definition}
\begin{definition}
Let $C$ be a 2-path cycle with paths $P_1$ and $P_2$.  A
\textit{cross-link} is a chord graph that connects a vertex of $P_1$
to a vertex of $P_2$, where neither endpoint of the connection is
$C$'s source or sink.  A \textit{down-link} is a chord graph that is not a cross-link.
\end{definition}

\begin{definition}
An \emph{SP-ladder} $G$ is a DAG consisting of a 2-path cycle with paths
$P_1$ and $P_2$, called the outer cycle of $G$, and one or more chord graphs
$H_1\ldots H_k$, such that:
\begin{itemize}
\item Each $H_i$ is an SP-DAG;
\item At least one $H_i$ is a cross-link;
\item If $G$ contains two chord graphs with endpoints $(u_1, v_1)$ and
  $(u_2, v_2)$, then these chord graphs do not cross; that is, in
  tracing the outer cycle around $G$, we never encounter both $u_2$
  and $v_2$ between $u_1$ and
  $v_1$.  % or $u_1$ or $v_1$ between $u_2$ and $v_2$.
\end{itemize}
\end{definition}

Intuitively, we call $G$ an SP-ladder because it can be viewed as a
2-path cycle ``decorated'' with non-cross-link chord graphs, plus one
or more cross-links connecting the paths, none of which cross each
other.  The cross-links are similar to the rungs of a ladder. Examples
of simple and complex SP-ladders are given in
Figure~\ref{fig:contraction}.

\begin{definition}
Say that a cycle $C$ of SP-ladder $G$ traverses a chord graph $H$ if $C$
passes through a node of $H$ other than its source or sink but is not
confined to $H$.
\end{definition}

%\begin{figure}[bth]
%\centering
%\includegraphics[width=0.6\columnwidth]{SP-ladder.pdf}
%\caption{An SP-ladder graph}
%\label{fig:spladder}
%\end{figure}

\begin{lemma}\label{lem:cs4ChordTraverse}
If an undirected simple cycle $C$ in $G$ traverses a chord graph $H$,
then $C$ contains a directed path in $H$ from its source $u$ to its
sink $v$.
\end{lemma}

\begin{lemma}
\label{lem:cs4-cycle}
Suppose that $C$ traverses $k \ge 0$ cross-links of $G$.  Then there
is a cycle $C'$ in $G$ with at least as many sources/sinks as $C$ that
does not traverse any cross-link of $G$.
\end{lemma}

\begin{corollary}
\label{cor:splcs4}
Every SP-ladder is CS4.
\end{corollary}
\begin{proof}
Let $C$ be any cycle in an SP-ladder $G$.  If $C$ traverses $k > 0$
cross-links of $G$, Lemma~\ref{lem:cs4-cycle} guarantees that there is
a cycle $C'$ with at least as many sources/sinks as $C$ that does not
traverse any cross-links of $G$.  Now either $C'$ is confined to some
chord graph $H$ of $G$, or $C'$
lies in the graph $G'$ obtained by removing all cross-links from $G$.
$H$ and $G'$ are both SP-DAGs, which are CS4 by
Lemma~\ref{lemma:spdag-cs4}.  Hence, $C'$ has only one source and one
sink.  Conclude that $C$ has only one source and one sink, and so $G$ is CS4.  
\end{proof}

\begin{lemma}
\label{lem:cs4in_spd_spl}
Let $G$ be a DAG with a single source and sink that is CS4.  Then $G$
is a serial composition of one or more graphs $G_1 \ldots G_k$,
s.t. each $G_i$ is either an SP-DAG or an SP-ladder.
\end{lemma}
\begin{proof}
Divide $G$ into subgraphs $G_1 \ldots G_k$ at its articulation points,
so that $G$ is the serial composition of $G_1 \ldots G_k$. If every
$G_i$ is an SP-DAG, we are done.  Otherwise, let $G^*$ be a component
of $G$ that is not an SP-DAG. Now $G^*$ has no internal articulation
points, so it is composed of a 2-path outer cycle cut by one or more
chord graphs.

Let $H_1$, $H_2$ be two chord graphs in $G^*$, with endpoints
$u_1/v_1$ and $u_2/v_2$.  If these subgraphs cross, then there exist
paths $P_1$ connecting $u_1$ and $v_1$ in $H_1$ and $P_2$ connecting
$u_2$ and $v_2$ in $H_2$.  Moreover, $G^*$'s outer cycle contains
$u_1$, $v_1$, $u_2$, and $v_2$ in some alternating order.  Hence, the
union of $P_1$, $P_2$, and this cycle is homeomorphic to $K_4$, and
so $G^*$ (and hence $G$) cannot be CS4.  Conclude that no two chord
graphs of $G^*$ cross.

Now suppose that some chord graph $H$ is not an SP-DAG.  Let $H^*$ be
a smallest subgraph of $H$ that is not an SP-DAG. $H^*$ cannot be a
serial composition of multiple subgraphs, so it is a 2-path outer
cycle with one or more chord graphs, all of which are SP-DAGs. If
$H^*$ had no cross-link, we could decompose it as an SP-DAG via
repeated parallel compositions to extract all of its chord graphs.
Hence, some chord graph $J$ of $H^*$ is a cross-link.

Let $u$, $v$ be the endpoints of $J$, and let $x$, $y$ be the source
and sink of $H^*$.  The outer cycle of $H^*$ connects these vertices
in the order $x$-$u$-$y$-$v$.  Moreover, there is a path from $u$ to
$v$ bypassing $x$ and $y$ (through the cross-link) and a path from $x$
to $y$ bypassing $u$ and $v$ (from $x$ outwards to the source of $H$,
then via the outer cycle of $G^*$ to the sink of $H$, and finally
inwards to $y$).  The union of these two paths and the outer cycle of
$H^*$ is therefore homeomorphic to $K_4$, and so $H^*$ (and hence $G$)
cannot be CS4.  Conclude that $H^*$, and therefore $H$, cannot exist,
and so every chord graph of $G^*$ is indeed an SP-DAG.

Finally, if no chord graph of $G^*$ is a cross-link, $G^*$ can be
decomposed via repeated parallel compositions to expose all its chord
graphs and so is an SP-DAG.  Otherwise, it is an SP-ladder.
Conclude that every component of $G$ is either an SP-DAG or an
SP-ladder.
\end{proof}

\begin{theorem}
The set of single-source, single-sink CS4 DAGs is exactly the family
of graphs, each of which is a serial composition of one or more graphs
$G_1 \ldots G_k$, s.t.\ each $G_i$ is either an SP-DAG or an
SP-ladder.
\end{theorem}
\begin{proof}
Lemma~\ref{lem:cs4in_spd_spl} shows that every single-source,
single-sink CS4 DAG is in the claimed family.  Conversely,
Lemma~\ref{lemma:spdag-cs4} and Corollary~\ref{cor:splcs4} show that
SP-DAGs and SP-ladders respectively are CS4.  Serial composition of such
graphs cannot introduce new cycles, so all such compositions remain
CS4.
\end{proof}

\section{Efficient Deadlock Avoidance for CS4 DAGs} \label{sec:sp-ladder-dummy}

We now present algorithms to compute optimal dummy message schedules
for deadlock avoidance on CS4 graphs. Since a CS4 graph is a serial
composition of SP-DAGs and SP-ladders, edges in different SP-DAGs and
SP-ladders cannot lie on the same simple cycle. Hence, we can first
decompose a CS4 graph into SP-DAGs and SP-ladders, then compute
schedules for edges in each of these subgraphs separately. We have
already described algorithms for SP-DAGs, so here we focus on
SP-ladders.

An SP-ladder can be decomposed into its constituent SP-DAGs as shown
in Figure~\ref{fig:contraction}, where each edge represents an SP-DAG
directed the same way as the edge.  This simplified representation of
an SP-ladder has two paths from the source $X$ to the sink $Y$.  For
convenience, we assume the two paths go from top to bottom and
distinguish them as the ``left path'' and the ``right path''.  We call
the vertices that connect these paths to cross-links \textit{corner
  vertices} and label them from top to bottom: $u_0, u_1, u_2, \ldots,
u_{k+1}$ on the left path and $v_0, v_1, v_2, \ldots, v_{k+1}$ on the
right path.  The source $X = u_0 = v_0$ and the sink $Y = u_{k+1} =
v_{k+1}$.  All other nodes are called \textit{internal nodes}.  This
graph has $k$ cross-links, numbered from top to bottom as $K_1$
through $K_k$, and the SP-DAGs on the outer cycle are numbered $S_0$
through $S_k$ on the left and $D_0$ through $D_k$ on the right.  Note
that in some cases $u_i = u_{i+1}$, in which case $S_i$ is a graph
with a single node. Figure~\ref{fig:ladderDecomp} illustrates the
general decomposition and this special case.



% An SP-ladder consists of multiple SP-DAGs, as shown in
% Figure~\ref{fig:contract}.  If we contract each of those SP-DAGs into
% a single edge and connect them following the topology of the original
% graph, we can get a simpler SP-ladder DAG, which has two paths
% connecting the source and the sink and cross-links connecting vertices
% on the two paths, as Figure~\ref{fig:contract} shows. For convenience,
% we assume the two paths go from top to the bottom and distinguish them
% as the ``left path'' and the ``right path''.  We mark the vertices
% from top to bottom, with the vertices on the left going from $u_0, u_1
% ,u_2,\ldots,u_{k+1}$ and the vertices on the right path from top to
% bottom as $v_0, v_1,v_2,\ldots,v_{k+1}$.  The source $X = u_0 = v_0$
% and the sink $Y = u_{k+1} = v_{k+1}$.  This graph has $k$ cross links.
% In addition, the cross-links are numbered from top to bottom as $K_1$% through $K_k$ and the SP-DAGs on the outer cycle on the left are
% numbered $S_0$ through $S_k$ and the right as $D_0$ through $D_k$.
% Note that in some cases $u_i = u_{i+1}$ and then $S_k$ is a graph with
% a single node.

\begin{figure}[bth]
\centering
\includegraphics[width=0.8\columnwidth]{contraction.pdf}
\caption{decomposition of an SP-ladder graph}
\label{fig:contraction}
\end{figure}

\begin{figure}[bth]
\centering
\includegraphics[width=0.6\columnwidth]{ladder_structure.pdf}
\caption{general structure of a decomposed SP-ladder graph, including
an example of cross-links sharing an endpoint.}
\label{fig:ladderDecomp}
\end{figure}



\begin{definition}
We say that an undirected simple cycle is \emph{external} if it traverses at
least two of the constituent SP-DAGs.
\end{definition}

% \begin{definition} A node $s$ is an \emph{rung source} if there
%   exists an external cycle $C$ that has $s$ as its source, and $s \neq X$.  
% \end{definition}

% \begin{fact}
%   An internal source is either 
%   (1) $u_i$ if $K_i$ goes from left to right, or (2) $v_i$ if $K_i$
%   goes from right to left.  
% \label{fact:potSource}
% \end{fact}

% FIXME: you use ``external'' a lot in the following discussion, but the
% definition is no longer present.

The following facts about external cycles can be derived using
structural properties of SP-ladders.
\begin{fact}
  Any external cycle with source $X= u_0= v_0$ has a path through
  $S_0$ and another path through $D_0$.  Any external
  cycle with source $u_i$ ($i \neq 0$) has one path going through
  $S_i$ and another path going through $K_i$.  Similarly for 
  source $v_i$ ($i \neq 0$).  All external cycles have corner nodes as
  sources and sinks.
\label{fact:extCyclePaths}
\end{fact}


\begin{fact}\label{fact:ladderCycleProp}
  Consider any external cycle $C$ with source $u_i$.  There are three possibilities:
\begin{itemize}
\item The sink of this cycle is $u_l$, where $i < l \leq k$ and $K_l$
  goes from right to left.  In this case, one path on the cycle
  crosses $K_i$, goes through all $v_j$ where $i \leq j \leq l$, and
  then traverses $K_l$.  The other path traverses $S_i$ and goes
  through all $u_j$ where $i < j < l$.
\item The sink of the cycle is $v_l$, where $i < l \leq k$ and $K_l$
  goes from left to right.  In this case, one path on the cycle
  crosses $K_i$ and passes through all $v_j$ where $i \leq j < l$.
  The other path traverses $S_i$, goes through all $u_j$ where $i \leq
  j \leq l$, and then crosses $K_l$.
\item The sink of the cycle is $Y = u_{k+1} = v_{k+1}$, the sink of
  the ladder.  One path on the cycle crosses $K_i$ and passes through
  all $v_j$ where $j \geq i$.  The other path traverses $S_i$ and goes
  through all $u_j$ where $j \geq i$.
\end{itemize}
\end{fact}
We call the sinks defined in Fact~\ref{fact:ladderCycleProp} the
\emph{potential sinks} of $u_i$.  We can similarly define potential
sinks for an internal source $v_i$.

\subsection{Destination-Tagged Propagation Algorithm}

We now describe the destination-tagged propagation algorithm for
SP-ladders.  Again, only sources send dummy messages.  An SP-ladder
has two types of cycle sources: \textit{internal sources} and
\textit{corner sources}.  The algorithms for internal sources are
similar to those described in Section~\ref{sec:sp-dags}, so we
concentrate on describing the algorithms for the corner sources.  We
describe all the algorithms for some $u_i$, where $u_i$ is a
corner node on the left path of the ladder; analogous algorithms can
be derived for nodes on the right path.

The corner sources have two kinds of outgoing edges: edges on cross-links
$K_i$, and edges on down-links ($S_i$ or $D_i$).  An edge going out of
a corner source $u_i$ carries three types of dummy interval-destination
pairs:
\begin{enumerate}
\item $[e]_i$ consists of pairs for messages that stay within
  the chord for which $u_i$ is a source ($S_i$ for down-link, and
  $K_i$ for cross-link).  These are kept sorted by increasing $\tau$
  as in the case of SP-DAGs.

\item $[e]_X$ consists of pairs for nodes $v_k$ where $k > i$, i.e.\
  corner nodes on the opposite side of the ladder from $u_i$.

\item $[e]_W$ consists of pairs for nodes $u_k$ where $k > i$, i.e.\
  corner nodes on the same side of the ladder as $u_i$.
\end{enumerate}
The second and third lists are stored separately, each sorted by
increasing $k$.  The complete schedule is $[e] = [e]_i \union [e]_X \union [e]_W$.
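
For concreteness, one way to hold the three lists attached to an edge is sketched below.  The container and method names (\texttt{EdgeSchedule}, \texttt{add\_internal}, \texttt{add\_external}) are our own illustration, not part of the algorithm's specification; pairs are represented as $(\tau, \textrm{destination})$ tuples.

```python
from dataclasses import dataclass, field

# Hypothetical container for the dummy interval-destination pairs on one
# edge out of a corner source u_i.  Destinations of external pairs are
# encoded as ('u', k) or ('v', k) corner names.
@dataclass
class EdgeSchedule:
    internal: list = field(default_factory=list)   # [e]_i, sorted by increasing tau
    opposite: list = field(default_factory=list)   # [e]_X, sorted by increasing k
    same_side: list = field(default_factory=list)  # [e]_W, sorted by increasing k

    def add_internal(self, tau, dest):
        self.internal.append((tau, dest))
        self.internal.sort(key=lambda p: p[0])     # keep sorted by tau

    def add_external(self, tau, dest_side, dest_index):
        pair = (tau, (dest_side, dest_index))
        lst = self.opposite if dest_side == 'v' else self.same_side
        lst.append(pair)
        lst.sort(key=lambda p: p[1][1])            # keep sorted by corner index k

    def all_pairs(self):
        # the complete schedule [e] = [e]_i U [e]_X U [e]_W
        return self.internal + self.opposite + self.same_side
```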

\subsubsection*{Computing Dummy Message Schedules}


We calculate the dummy message schedules for edges as follows:
\begin{enumerate}

\item Decompose the SP-ladder into the component SP-DAGs, identifying the
  $u_i$'s, $v_i$'s, $S_i$'s, $D_i$'s and $K_i$'s.  In
  addition, mark each edge as either belonging to a cross-link or
  a down-link.  This can be done in $O(\card{G})$ time.

\item Compute $[e]_i$, the schedules for all edges due to cycles internal
  to each chord graph, using the algorithm of
  Section~\ref{sec:dest-tagged-spdags}.

\item For all $H \in \bigcup_{0 \leq i \leq m} (S_i \cup D_i \cup K_i)$,
  compute $L(H)$, the length of a shortest path from $H$'s
  source to its sink (in terms of buffer sizes).  Again, this is done
  as shown in Section~\ref{sec:dest-tagged-spdags}.

\item Starting at the bottom of the SP-ladder, for each $u_i$ and for
  each potential sink $t$ of $u_i$, compute $L_s(u_i,t)$, defined as
  the length of the shortest directed path starting at $u_i$, going
  through $S_i$, and ending at $t$.  Similarly, define $L_k(u_i, t)$ as
  the length of the shortest directed path starting at $u_i$, going through $K_i$,
  and ending at $t$.  If $u_i$ is not the source of $K_i$, then
  simply set $L_k(u_i, t) = 0$.  Define and compute $L_d(v_i, t)$ and 
  $L_k(v_i, t)$ in a similar manner.

\item Using these $L$ values, update the set of dummy
  interval-destination pairs for all edges that start at internal
  sources or at the source $X$.  No other sets change.
\end{enumerate}

For step 1 above, we decompose an SP-ladder into its constituent SP-DAGs
in $O(\card{G})$ time as follows.  Identify an outer cycle $C$ for $G$,
with left and right sides, using DFS in linear time.  For each vertex
$u$ on the left side of $C$, determine (via DFS) whether any directed
path leaving $u$ reaches the right side of $C$ at some vertex $v$
before it returns to the left side.  If so, the nodes and edges
on all such paths from $u$ to $v$ form a cross-link.  Repeat for the
right side of $C$ to identify cross-links directed from right to left.
Having identified all the $u_i$'s and $v_i$'s, we can then easily
compute the $S_i$'s, $D_i$'s, and $K_i$'s.


For step 4 above, we compute $L_s(u_i, t)$ and $L_k(u_i, t)$, where $t$ is a potential sink $u_k$ or $v_k$ of $u_i$.  We consider
$u_i$'s in decreasing order of $i$.  In order to compute $[e]_X$ and
$[e]_W$ in sorted order, for a particular $u_i$, we consider $t$ in
increasing order of $k$.  
\begin{eqnarray*}
L_s(u_i, u_i) &=& 0\\
% L_k(u_i, v_i) &=& L(K_i) \mbox{ if $u_i$ is the source of $K_i$} \\
% &=& 0 \mbox{ otherwise}\\
% L_k(v_i, u_i) &=& L(K_i) \mbox{ if $v_i$ is the source of $K_i$} \\
% &=& 0 \mbox{ otherwise}\\
L_s(u_i, t) &=& L(S_i) +  \\
&& \left\{ \begin{array}{cc} L(K_{i+1}) & \mbox{if $v_{i+1} =
      t$,}\\      
                             L_s(u_{i+1}, t) & \mbox{otherwise}
                   \end{array}
                   \right. \\
L_k(u_i, t) &=& \left\{ \begin{array}{cc} 
                 L(K_i) + L_d(v_i,t) & \mbox{if $u_{i}$ is $K_{i}$'s source} \\   
                 0 & \mbox{otherwise}
                   \end{array}\right.
\end{eqnarray*}
Say $t = v_k$, that is, $t$ is on the opposite side of the ladder from
$u_i$.  For each edge $e$ that starts at $u_i$, if $e$ is a cross-link
edge, then set $[e]_X \gets [e]_X \cup \{(L_s(u_i, t), t)\}$, and if $e$ is a
down-link edge, set $[e]_X \gets [e]_X \cup \{(L_k(u_i, t), t)\}$.  On the
other hand, if $t = u_k$, that is, $t$ is on the same side of the ladder as
$u_i$, then the same updates are made to $[e]_W$.  Since we consider $t$
in increasing order of $k$, these lists are sorted by increasing $k$.
The calculations for $v_i$ are analogous.
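
The recurrences above translate into straightforward recursion.  The sketch below is our own rendering: the array names (\texttt{L\_S}, \texttt{L\_D}, \texttt{L\_K} for $L(S_i)$, $L(D_i)$, $L(K_i)$), the encoding of sinks as \texttt{('u', k)} / \texttt{('v', k)} pairs, and the cross-link indexing are illustrative conventions, not the paper's notation.

```python
def L_s(i, t, L_S, L_K):
    """Shortest buffer-length path from u_i through down-link S_i to sink t."""
    side, k = t
    if side == 'u' and k == i:
        return 0                               # base case: L_s(u_i, u_i) = 0
    if side == 'v' and k == i + 1:
        return L_S[i] + L_K[i + 1]             # cross over K_{i+1} to reach v_{i+1}
    return L_S[i] + L_s(i + 1, t, L_S, L_K)    # otherwise recurse from u_{i+1}

def L_d(i, t, L_D, L_K):
    """Mirror image of L_s on the right side of the ladder."""
    side, k = t
    if side == 'v' and k == i:
        return 0
    if side == 'u' and k == i + 1:
        return L_D[i] + L_K[i + 1]
    return L_D[i] + L_d(i + 1, t, L_D, L_K)

def L_k(i, t, L_D, L_K, u_is_source_of_K):
    """Shortest path from u_i through cross-link K_i to sink t; defined as 0
    when u_i is not the source of K_i."""
    if not u_is_source_of_K[i]:
        return 0
    return L_K[i] + L_d(i, t, L_D, L_K)
```

For example, on a ladder with $L(S_0)=2$, $L(S_1)=3$, the path $u_0 \to u_2$ along down-links has length $5$.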

Now we postprocess to remove superfluous pairs of
dummy messages.  For the internal dummy pairs, we apply the same
processing as for SP-DAGs.  For the external dummy pairs, we do the
following for each node $u_i$.
\begin{itemize}
\item If any edge $e$ has an internal pair $p_a = (\tau_a, d_a)$ and
  an external pair $p_b = (\tau_b, d_b)$, where $\tau_a \geq \tau_b$,
  then $p_a$ is removed.

\item If a particular edge $e$ has more than one pair with the
  same destination, we keep only the one with the smallest $\tau$.

% \item Say a cross-link edge $e$ has a pair $p_a = (\tau_a, d_a)$ in
%   $[e]_X$, such that $d_a = v_k$.  Also say that either $[e]_W$ or
%   $[e]_X$ has another pair $p_b = (\tau_b, d_b)$, such that $\tau_a
%   \geq \tau_b$.  If $d_b = v_j$ or $d_b = u_j$ for $k\leq j$, then we
%   can remove $p_a$, since any dummy message going to $d_b$ must pass
%   through $d_a$ due to Lemma~\ref{lem:ladderCycleProp}.

% \item Say a down-link edge $e$ has a pair $p_a = (\tau_a, d_a)$ in
%   $[e]_W$ such that $d_a = e_k$.  Also, say that either $[e]_W$ or
%   $[e]_k$ has a pair $p_b = (\tau_b, d_b)$, such that $\tau_a \geq
%   \tau_b$.  If $d_b = v_j$ or $d_b = u_j$ for $k\leq j$, then we can
%   remove $p_a$, since any dummy message going to $d_b$ must pass
%   through $d_a$ due to Lemma~\ref{lem:ladderCycleProp}.

% \item Say $u_i$ has a dummy pair $p_a = (\tau_a, d_a)$ with $d_a =
%   u_l$ with $l \leq k$ on one of its down-link edges.  Say there is a
%   dummy pair $p_b = (\tau_b, d_b)$, which originates either on a
%   down-link edge of $u_l$ ($l<i$) or on a cross-link edge of $v_l$
%   ($l\leq i$).  If $\tau_b \leq \tau_a$ and $d_b = u_k$ or $d_b = v_k$
%   with $k>l$, then $p_a$ can be removed, since a dummy message going
%   to $d_b$ must pass through both $u_i$ and $u_l$ and stay on
%   down-links.

\end{itemize}
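
The two pruning rules above can be sketched as follows.  The function name and pair encoding are our own; we read the second rule as applying to the external lists, where duplicate destinations can arise.

```python
# Sketch of the post-processing on one edge's pair lists.  Pairs are
# (tau, dest) tuples; `internal` holds [e]_i and `external` holds the
# union of [e]_X and [e]_W.
def prune_pairs(internal, external):
    # Rule 2: among pairs with the same destination, keep the smallest tau.
    best = {}
    for tau, dest in external:
        if dest not in best or tau < best[dest]:
            best[dest] = tau
    external = [(tau, dest) for dest, tau in best.items()]

    # Rule 1: an internal pair (tau_a, d_a) is superfluous whenever some
    # external pair has tau_b <= tau_a, i.e. whenever tau_a >= the minimum
    # external tau.
    min_ext = min((tau for tau, _ in external), default=float('inf'))
    internal = [(tau, dest) for tau, dest in internal if tau < min_ext]
    return internal, external
```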

\subsection{Runtime Node Behavior}

The behavior of all nodes except the corner sources remains the same
as in our Propagation Algorithm for SP-DAGs.  As mentioned above, a
corner source $u_i$ has three lists of dummy message pairs: $[e]_i$,
$[e]_X$, and $[e]_W$, where $[e]_i$ is sorted by increasing $\tau$, and
$[e]_X$ and $[e]_W$ are sorted by increasing $k$, the index of the
corner-sink destination $v_k$ or $u_k$, respectively.  Each dummy pair $p_a =
(\tau_a, d_a)$ has a counter $c_a$ associated with it, whose maximum
value is $\tau_a$.  One other difference from SP-DAGs
is that in some cases a dummy message can have more than one
destination; in that case, the dummy message carries the list
of destinations with it.  There are two cases in the runtime
behavior of a corner source $u_i$.

\textbf{Case 1: $u_i$ receives a non-dummy message.} For each outgoing
edge $e$, increment the counters in $[e]_i$, $[e]_X$, and $[e]_W$,
starting from the end (decreasing $\tau$ for $[e]_i$ and decreasing $k$
for $[e]_X$ and $[e]_W$).  If a pair $p_a = (\tau_a, d_a)$ reaches its
maximum value, then a dummy message with destination $d_a$ is
scheduled along that edge, and the counter for $p_a$ is zeroed out.
If $d_a$ is an internal destination, then $u_i$ behaves in the same way
as in the SP-DAG algorithm.  If $d_a = u_k$ ($k > i$) or $d_a = v_k$ ($k
\geq i$) is a corner node, all the counters in $[e]_W$ are zeroed out.
In addition, the following occurs.
\begin{itemize}
\item If $e$ is in a cross-link, then the counters for
  pairs in $[e]_X$ with destinations $v_j$, $j\leq k$, are zeroed out.
\item If $e$ is in a down-link, then the counters for pairs in
  $[e]_W$ with destinations $u_j$, $j\leq k$, are zeroed out.
\end{itemize}
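
The counter discipline of Case 1 on a single pair list might be sketched as below.  The parallel-list representation and function names are our own simplification of the per-pair counters $c_a$ described above.

```python
def on_real_message(pairs, counters):
    """Increment every counter on one edge list; return the destinations
    whose pair fired (counter reached its maximum tau).  `pairs` is a list
    of (tau, dest) and `counters` is a parallel list of current counts."""
    fired = []
    for idx in range(len(pairs) - 1, -1, -1):   # scan from the end, as in the text
        tau, dest = pairs[idx]
        counters[idx] += 1
        if counters[idx] >= tau:                # counter reached its maximum value
            fired.append(dest)
            counters[idx] = 0                   # zero out the fired counter
    return fired

def zero_dominated(pairs, counters, k):
    """Zero counters for pairs whose corner destination index j <= k, as when
    a dummy headed to u_k / v_k subsumes dummies to nearer corners."""
    for idx, (tau, (side, j)) in enumerate(pairs):
        if j <= k:
            counters[idx] = 0
```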

\textbf{Case 2: $u_i$ receives a dummy message, or a real message also
  marked as a dummy.}  If $u_i$ is the only destination, then no action
need be taken.  Otherwise, the destination(s) are always other corner
nodes.  Consider a destination $d_a = u_k$ ($k > i$) or $v_k$ ($k\geq i$).
\begin{itemize}
\item If $d_a$ is some $u_k$ or $v_k$ with $k>i$,\footnote{If
    there are two cross-links out of $u_i$, then we use the larger index $i$
    to make this decision.} then the message is scheduled on all
  the down-link edges, and the counters for the pairs going to this
  destination are zeroed out.  For each down-link edge $e$, all the
  counters in $[e]_i$ (for all the internal dummy messages)
  are zeroed out, as are all the counters in $[e]_W$ with
  destinations $u_j$, $j\leq k$.  All the counters (on
  down-links and cross-links) that are not zeroed out are incremented.
\item If $d_a$ is some $v_k$ with $k = i$,\footnote{If there
    are two cross-links from $u_i$, we forward along the one whose
    index equals $i$.}  then the message is scheduled along all the cross-link
  edges, and all the counters in $[e]_i$ are zeroed out.  All the other
  counters are incremented.  
\end{itemize}

If $u_i$ wants to send multiple dummy messages on the same edge, they are merged and a list of destinations is created.  In this formulation, assuming all buffer sizes are non-zero, each dummy message has at most two destinations.
In both cases, if the node wants to send both a real message and a dummy message along the same edge, the real message is also marked as a dummy, and a total of one message is sent.  
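
The merging rules above can be sketched as a small decision function; the message "kinds" returned here are our own labels, introduced only for illustration.

```python
def outgoing_message(has_real, dummy_dests):
    """Decide what is actually sent on one edge in one step: several dummy
    destinations collapse into a single message with a destination list, and
    a pending real message absorbs the dummy role by being marked as one.
    Returns (kind, destination_list), or None if nothing is sent."""
    dests = list(dict.fromkeys(dummy_dests))   # merge duplicates, keep order
    if has_real and dests:
        return ('real-marked-dummy', dests)    # one message plays both roles
    if dests:
        return ('dummy', dests)
    return ('real', []) if has_real else None
```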


\subsection{Proof of Correctness}

SP-ladders have the CS4 property that each undirected cycle has at
most one source and one sink.  Therefore, for a deadlock to
occur, one path from the source to the sink must be full and another
path must be empty.  Here, we show that this cannot occur when using
the above algorithm for dummy schedules and node behavior.

The following lemma shows why a node can safely zero out counters as described in the previous subsection.  

\begin{lemma}\label{lem:spladder-prop}
The following claims are true.
\begin{enumerate}
\item If a corner source $u_i$ forwards a dummy message 
  along an edge of a chord graph, it will go through all the nodes within
  that chord.
\item If a corner source $u_i$ sends or forwards a dummy 
  message along a down-link to some sink $u_k$ or $v_k$, where $k\geq i$,
  this message will go through all the nodes $u_j$, $i \leq j \leq k$.
\item If a corner source $u_i$ sends or forwards a dummy message along
  a cross link $K_i$ intended for $v_k$ or $u_k$, where $k\geq i$, it
  reaches all the nodes $v_j$, $i\leq j\leq k$.
\end{enumerate}
\end{lemma}

The following lemmas are analogous to Lemmas~\ref{lem:maxint}
and \ref{lem:postproc} for SP-DAGs.  The proofs are deferred to the appendix.  
\begin{lemma} \label{lem:spladder-maxint}
Suppose that, for an edge $e$ out of node $X$, pair $(\tau_i, d_i) \in
[e]$.  For every $\tau_i$ messages that $X$ receives, it sends at least
one dummy message along $e$ that will reach $d_i$.
\end{lemma}


\begin{lemma}\label{lem:intervals}
  Suppose that an external cycle in $G$ starts at $u_i$ and ends at
  $t$.  Every time $u_i$ receives $L_s(u_i, t)$ messages, it sends at
  least one dummy message with destination $t$ along all its cross-link
  edges.  Every time $u_i$ receives $L_k(u_i,t)$ messages, it sends at
  least one dummy message with destination $t$ along all its down-link edges.
\end{lemma}

Using the above lemmas, we can prove the correctness theorem.  
\begin{theorem} \label{thm:ladder-correct}
If dummy messages are sent as described in Section~\ref{sec:propbehavior},
using the interval-destination pairs computed by the above procedure, 
then deadlock cannot occur in $G$.
\end{theorem}

\begin{proof}
  Suppose a deadlock does occur in $G$. Then there must be a blocking
  cycle $C$ in $G$.  WLOG, say that the blocking cycle starts at $u_i$
  and ends at some sink $t$, and one path from $u_i$ to $t$ goes
  through $K_i$ and another one goes through $S_i$.  Say that the path $s_1$
  through $K_i$ is full and the path $s_2$ through $S_i$ is empty.  

  We know that $\length(s_1) \geq L_k(u_i, t)$.  Now consider the
  first edge of path $s_2$; it leaves $u_i$ along its down-link.
  From Lemma~\ref{lem:intervals}, $u_i$ sends a dummy message with
  destination $t$ along this edge every time it receives
  $L_k(u_i,t)$ messages.  Since this
  message is propagated all the way to $t$, $s_2$ cannot be completely
  empty, which contradicts our assumption that cycle $C$ is blocking.  
\end{proof}

% \subsection{Compile time and runtime efficiency}

% \begin{lemma}
%   For a down-link edge, $[e]_X$ is sorted by increasing $\tau$.  For a
%   cross-link edge, $[e]_W$ is sorted by increasing $\tau$.
% \end{lemma}

% It takes $O(n^2)$ time to compute all dummy message pairs. $[e]_i$
% postprocessing takes $O(n)$ time per edge.  Since $[e]_W$ and $[e]_K$
% are stored witn increasing index of destination, step 1 takes $O(n)$
% time per edge, since it requires a single scan.  Now consider a
% down-link edge.  We can scan through its $[e]_W$ starting from the end,
% remembering the minimum $\tau$ and removing all pairs that have a
% smaller $\tau$.  In addition, since its $[e]_X$ is sorted by $\tau$,
% we can search for each element in $[e]_W$ in $[e]_X$ and scan to see
% if it can be removed.  The whole procedure takes $O(n^3)$ time in the
% worst case.


\subsection{Non-propagation Algorithm}

Computing the dummy intervals for the Non-propagation Algorithm takes
longer than for our destination-tagged propagation algorithm.  Here we
give an $O(\card{G}^3)$ algorithm.

Again, we decompose the ladder into its constituent SP-DAGs.  As in the
Non-propagation Algorithm for SP-DAGs, for each constituent SP-DAG
$H$ we precompute $h(H)$, the length of the longest path (in terms
of the number of hops) from $H$'s source to its sink.  In addition,
for each edge $e$ in $H$, we compute $h(H,e)$, the length of the longest path from
$H$'s source to its sink that passes through $e$.  Finally, we
compute an initial estimate of the dummy intervals considering only
the cycles internal to the constituent SP-DAGs.

Now consider every source $u_i$ in the SP-ladder.  We can enumerate
all the potential sinks $t$ for that source using
Fact~\ref{fact:ladderCycleProp}.  Analogously to $L_s(u_i, t)$ and
$L_k(u_i, t)$, we define $h_s(u_i, t)$ as the length of the longest
directed path (in terms of hop count) from $u_i$ to $t$ that goes
along $S_i$, and $h_k(u_i, t)$ as the length of the longest directed
path from $u_i$ to $t$ that goes along $K_i$.

Now consider an edge $e$ in some constituent SP-DAG $H$ along a path
from $u_i$ to $t$.  We update the dummy interval for $e$ as
follows: if $e$ lies along some path from $u_i$ to $t$ that goes
across $K_i$, then $[e] \gets \min([e],\, L_s(u_i, t)/(h_k(u_i, t) - h(H) + h(H,e)))$.
If, on the other hand, $e$ lies along some path from $u_i$ to $t$ that
goes across $S_i$, then $[e] \gets \min([e],\, L_k(u_i, t)/(h_s(u_i, t) - h(H) +
h(H,e)))$.  We perform the analogous procedure for each internal source
$v_i$.
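
The interval update is a one-line computation.  In the sketch below, taking a minimum with the current interval is our reading of "update" (each newly considered cycle can only tighten an edge's interval), and the argument names are our own.

```python
def update_interval(current, L_other_path, h_this_path, h_H, h_H_e):
    """Tighten the dummy interval of edge e in component H: the new estimate
    is L(other path) / (h(this path) - h(H) + h(H, e)), matching the formula
    [e] = L_s(u_i,t) / (h_k(u_i,t) - h(H) + h(H,e)) for a K_i-side edge."""
    new_estimate = L_other_path / (h_this_path - h_H + h_H_e)
    return min(current, new_estimate)
```

For example, with $L_s = 12$, $h_k = 6$, $h(H) = 4$, and $h(H,e) = 4$, the new estimate is $12/6 = 2$, which replaces a looser current interval of $10$.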

\textbf{Running time:} There are $O(\card{G}^2)$ source-sink pairs.
For a given pair $u_i$ and $t$, we can calculate $L_s(u_i, t)$,
$L_k(u_i, t)$, $h_s(u_i, t)$ and $h_k(u_i,t)$ using $L$ and $h$
values of the constituent SP-DAGs in $O(\card{G})$ time.  We can also
update all dummy intervals for edges on some path from $u_i$ to $t$ in
$O(\card{G})$ time.  Therefore, the overall algorithm takes
$O(\card{G}^3)$ time.


\begin{comment}

--------------------------------------------------------------------
On this simple DAG, we can now consider each potential source $u_i$.
For each of these sources, consider the potential sinks in order

Using these, we can now consider each potential source $u_i$ and its
potential sinks.  Due to Lemma~\ref{lem:cycleprop}, we know that $C$'s
sink is either some $u_j$ where $j>i$ and $K_j$ goes from right to
left, or some $v_j$ where $j>i$ and $K_j$ goes from left to right.  We can compute


Using these estimates, we can use the 

Now we can compute the dummy intervals due to all cycles with source
$u_i$.  Consider a cycle $C$ that has source $u_j$.  

each cycle with source $u_i$ have sink $u_j$ 

Now, we define $h(u_i, e)$ as the length of the longest directed path
(in terms of the number of hops) that starts at $u_i$, goes through
$e$, and ends either on the right side of the DAG or at $u_j$ where $j
> i$.  The value is defined as $0$ if $e$ is not topologically after
$u_i$.  And we define $h(u_i) = \max_{e \in out{u}} h(u_i, e)$.
Again, we can compute these values for each edge starting at the
bottom of the SP-ladder.  For each $K_i$ if $K_i$ goes from left to
right, then the values for $v_i$ are computed first and vice-versa.

\begin{eqnarray}
h(u_{k+1}, e) &=&  0 \\
h(u_i, e) &=& \begin{array}{cc}
             h(S_i, e) + h(u_{i+1}) & \mbox{if $e \in S_i$} \\
             h(S_i) + h(u_{i+1}, e) & \mbox{if $u_{i+1} < e$} \\
             h(K_i, e) + h(v_i) & \mbox{if $e \in K_i$ and $K_i$ goes from L to R} \\

             0 & \mbox{otherwise}
             \end{array} \\
h(v_i, e) &=& \begin{array}{cc}
             h(D_i, e) + h(v_{i+1}) & \mbox{if $e \in D_i$} \\
             h(K_i, e) & \mbox{if $e \in K_i$ and $K_i$ goes from R to L} \\
             h(D_i) + h(v_{i+1}, e) & \mbox{if $v < e$ but $e \not\in D_i$} \\
             0 & \mbox{otherwise}
             \end{array} 
\end{eqnarray}

Given these values and $L(u_i)$'s we can compute the dummy message
intervals for the Non-Propagation algorithm as follows:  For each edge $e$ topologically after a potential source $u_i$, 


During the decomposition process, for each SP component $H$ with source $x$ and sink $y$, we compute the $h(H)$, the maximal hop count from $x$ to $y$ and $L(H)$, the shortest path from $x$ to $y$. For each edge $e$ in $H$, we compute $h(H,e)$, the maximal hop count from $x$ to $y$ while passing $e$. These computations can be done with Algorithm~\ref{algo:nonprop-sp}. To compute the dummy intervals, we enumerate all cycles by enumerating all cross-link pairs. To save space, we combine the computation of dummy intervals for the Propagation Algorithm and for the Non-Propagation Algorithm into one single algorithm.




Again, we use the
property from Lemma~\ref{lem:laddercycle}.  For each potential source $u$, we consider the potential sinks $u_j$ and $v_j$.  









\begin{lemma}
  Each undirected cycle that is not internal to any of its SP-DAG
  components with node $u_i$ as its source can have three
  configurations:
\begin{itemize}
\item Two directed paths from $u_i$ to the sink of the ladder $Y$. One
  of the paths goes entirely through the outer cycle all the way to
  $Y$ and another through the cross link to $v_i$ and then from $v_i$
  to $X$ through the outer cycle.
\item Two directed paths from $u_i$ to some $u_j$, where $j>i$ and
  cross link $K_j$ goes from right to left.  One path is entirely in
  the other cycle and another one goes through cross links $K_i$ and $K_j$.  
\item Two directed paths from $u_i$ to some $v_j$, where $j>i$ and
  cross link $K_j$ goes from left to right.  One path goes through
  cross link $K_i$ from left to right, and another path goes through
  cross link $K_i$ from left to right.
\end{itemize}
\end{lemma}
\begin{proof}
If we look at the contracted version of the SP-ladder, then we can see that these are the only possibilities.  
\end{proof}

Given this lemma, the idea behind the propagation algorithm is simple.
Since each path starting at $u_i$ either goes over some edge in $S_i$
or some edge in $K_i$, we can compute the shortest paths starting at
$u_i$ that go through $S_i$ and that go through $K_i$, and that can be
on some undirected simple cycle.


\begin{lemma}
\label{lem:spdag-cs4-2}
Each undirected simple cycle on a simple SP-ladder $G$ is composed of
two edges $u_{i1}v_{j1}$, $u_{i2}v_{j2}$ and two paths $u_{i1}\to
u_{i2}$, $v_{j1}\to v_{j2}$. $1 \le i1 \le i2 \le m$, $1 \le j1 \le j2
\le n$.
\end{lemma}  
\begin{proof}
[NEED IMPROVEMENT]
An undirected cycle $C$ on $G$ must have vertices from both the left path and the right path. According to Corollary V.5, C is single-source, single sink. WLOG, suppose the source $s$ is on the left path. $s$ has one outgoing edge on the left path, the other outgoing edge on the cycle must be an edge connecting to the right path. Since no two cross-links cross with each other, so the only way to form a simple cycle is that the two outgoing paths from the source merge at some sink node. No matter on the left path or on the right path, another cross-link is needed. 
\end{proof}





The Non-Propagation algorithm requires more work.  



%To reduce the computation of the shortest path and maximal hop count, for each vertex $z$ we pre-compute $L(z)$ as the path length from the source and $H(z)$ as hop count from the source following the left path or the right path. 
\begin{algorithm}[htb]
	\label{algo:prop-ladder}
	\SetAlgoNoLine
	\SetAlgoNoEnd
	\DontPrintSemicolon
	\caption{Set dummy interval for edges on SP-Ladders}

	\emph{/* $[e]_p$: interval for the Propagation Algorithm*/}

	\emph{/* $[e]_n$: interval for the Non-Prop. Algorithm*/}

	Apply Sc/Pc decomposition on G \;

	\ForEach{Maximal SP-subgraph $H$ with two terminals $x$ and $y$ on $G$}{
		Contract $H$ to an edge $x\to y$ on $G'$
	}

	\ForEach{pair of edges $u_{i1}v_{j1}$ and $u_{i2}v_{j2}$ on $G'$ }{
		Let $C$ be the cycle that has $u_{i1}v_{j1}$ and $u_{i2}v_{j2}$ \;
		Let $P_l$ and $P_r$ be two maximal disjoint paths on $C$\;
		$H_l = \Sigma_{p\in P_l}{h(p)}$ \;
		$H_r = \Sigma_{p\in P_r}{h(p)}$ \;
		$L_l = \Sigma_{p\in P_l}{L(p)}$ \;
		$L_r = \Sigma_{p\in P_r}{L(p)}$ \;
		\emph{/* Update dummy intervals for the propagation algorithm */}

		\ForEach{edge $e\in G$ {AND} $e$ going out of the source}{
				\If{$e$ in a component contracted to $E$ on $P_l$}{
					$newItv = L_r$ \;
				}\Else{
					\emph{/*$e$ in a component contracted to $E$ on $P_r$*/}

					$newItv = L_l$ \;
				}
				$[e]_p = min([e]_p,newItv)$ \;
			}
		\emph{/* Update dummy interval for the Non-Prop. Algo. */}

		\ForEach{edge $e\in G$ in a SP-subgraph $xy$ contracted to $E$ on $C$}{	
			\If{edge $E$ on $P_l$}{
				$newItv = L_r/(H_l+h(H,e)-h(H)))$ \;
			}	
			\Else{
				\emph{/* edge $E$ on $P_r$*/}

				$newItv = L_l/(H_r+h(H,e)-h(H)))$ \;
			}
			$[e]_n = min([e]_n,newItv)$ \;
		}	
	}
\end{algorithm}
\end{comment}

\section{Conclusions}

In this work, we have explored the practicality of a flexible, general
model of streaming computation, introduced by Li et al.~\cite{SPAA10},
which permits computation nodes to arbitrarily filter their inputs.
We have shown that, if the allowed streaming topologies are restricted
to the CS4 DAGs (or, more stringently, to the SP-DAGs), then we can
efficiently compute dummy message intervals for all edges.  In
addition, we have extended one of their dummy message-based algorithms
to reduce the amount of propagation, thereby potentially reducing
overheads.  Hence, if the streaming application programmer agrees to
use such topologies, the compiler and runtime system can guarantee
safe execution of the resulting applications, in a way that is
non-intrusive to application code and that scales even to large and
complex applications.

Our work raises several directions for future research.  One open
question is whether one can further reduce the number of dummy
messages, or prove that the overhead of dummy messages is minimal in
some sense.  A second question is whether one can efficiently and
systematically translate arbitrary DAGs to equivalent CS4 topologies
by adding a small number of nodes and edges.  Finally, we plan to
augment an existing language for streaming computation, such as the X
language~\cite{Franklin06}, to support the filtering model.

\bibliographystyle{abbrvnat}
\bibliography{ppopp12}

\newpage

\input{appendix}


\end{document}
