\chapter{Streaming Workflows}
\label{chap:stream_work}
\section{Formal Definition}

A streaming workflow consists of a set of independent tasks $V$ and a set of communication links $E$.
In characterizing these workflows it is convenient to represent them as a weighted directed acyclic graph (DAG) $G=(V,E)$.
A vertex $v_{i} \in V$ in this graph represents an individual processing step, while an edge $e_{ij} \in E$ ($v_{i}, v_{j} \in V$)
represents the data flow from task $v_{i}$ to task $v_{j}$. This representation contains two special vertices:
a source task $v_{source}$ that produces data, and a sink task $v_{sink}$ that represents the last computation stage.
The execution cost of a single data item by a task $v_i \in V$ is represented in $G$ as the weight of the vertex, $et(v_i)$.
The communication cost between two neighboring tasks $v_i$ and $v_j$ is represented as the
weight of the edge $e_{ij} \in E$, $wt(e_{ij})$. Finally, the critical path of the DAG $G$,
$CP(G)$, is the path in $G$ with the highest total weight. Clearly $v_{source} \in CP(G)$ and $v_{sink} \in CP(G)$, since
they represent the input and output stages. It is also useful to define two quantities for a task $v_i$: the bottom level $bottomL(v_i)$ and the top level
$topL(v_i)$. $topL(v_i)$ is the weight of the heaviest path from $v_{source}$ to $v_i$, while $bottomL(v_i)$ is the weight of the heaviest path from
$v_i$ to $v_{sink}$. Both $topL()$ and $bottomL()$ include the endpoint weights. This definition implies that for any DAG $G$ depicting a streaming workflow:
\begin{equation}
\sum_{v_i \in CP(G)}{et(v_i)} + \sum_{e_{ij} \in CP(G)}{wt(e_{ij})}  = topL(v_{sink}) = bottomL(v_{source})
\end{equation}
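The definitions above can be sketched directly in code. The following is a minimal illustration on a small hypothetical four-task workflow (the task names and weights are invented for the example, not taken from any figure); it computes $topL$ and $bottomL$ recursively and checks that both formulations of the critical-path weight agree.

```python
# Hypothetical workflow: vertex weights et() and edge weights wt()
et = {"source": 1, "a": 4, "b": 2, "sink": 1}
wt = {("source", "a"): 3, ("source", "b"): 1,
      ("a", "sink"): 2, ("b", "sink"): 5}
succ = {"source": ["a", "b"], "a": ["sink"], "b": ["sink"], "sink": []}
pred = {"source": [], "a": ["source"], "b": ["source"], "sink": ["a", "b"]}

def topL(v):
    # heaviest path from source to v, endpoint weights included
    if not pred[v]:
        return et[v]
    return et[v] + max(topL(u) + wt[(u, v)] for u in pred[v])

def bottomL(v):
    # heaviest path from v to sink, endpoint weights included
    if not succ[v]:
        return et[v]
    return et[v] + max(wt[(v, u)] + bottomL(u) for u in succ[v])

# Both formulations give the critical-path weight of the DAG
assert topL("sink") == bottomL("source")
print(topL("sink"))  # → 11, via source → a → sink
```

For the weights above the critical path is $source \rightarrow a \rightarrow sink$ with total weight $1+3+4+2+1=11$.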

\section{Task and Data Parallelism}
An example of a streaming workflow is shown in Figure \ref{audio}. Here a decoder task produces a stream of audio data,
which is processed in phase space by a digital filter and in the time domain by a scaler. After processing, the audio data
is re-encoded and played back. Additionally, the phase space representation is used to generate a spectrum power plot, and the scaler
output is used to produce a volume unit (VU) display for the audio engineer. In order for this system to operate properly, all of the
data processing must keep up with the throughput constraint imposed by the audio stream. Furthermore, in the case of live audio processing,
the system must be transparent, which implies that latency must be as low as possible.

In the workflow of Figure \ref{audio}, if the digital filter task is not able to keep up with the desired throughput,
it will create a bottleneck. To mitigate this problem, the digital filter task can be \emph{replicated} across another available computing
node, as shown in Figure \ref{audio_rep}. In this configuration each digital filter task processes only a fraction of the input,
thereby increasing total throughput. This technique, known as \emph{task replication}, can be used by scheduling algorithms
to increase system throughput.
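The arithmetic behind replication is straightforward. The sketch below uses invented numbers (not measurements from the audio workflow): with $k$ round-robin replicas, each replica sees only every $k$-th data item, so the sustainable throughput of the replicated stage scales from $1/et$ to $k/et$.

```python
import math

# Illustrative values: the hypothetical digital filter needs 10 ms
# per item, while audio frames arrive every 4 ms.
et_filter = 10.0      # ms of CPU time per data item
stream_period = 4.0   # ms between incoming frames

# A single filter sustains 1/et_filter items per ms -- a bottleneck.
assert 1 / et_filter < 1 / stream_period

# With k round-robin replicas the stage sustains k/et_filter items
# per ms, so the minimum replica count is ceil(et_filter / period).
k = math.ceil(et_filter / stream_period)
assert k / et_filter >= 1 / stream_period
print(k)  # → 3 replicas suffice
```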

\begin{figure}[h]
	\centering
	\includegraphics[width=1\textwidth]{figures/Audio.eps}
		\caption{Audio processing streaming workflow that applies a series of linear and phase-space filters to a decoded audio stream. The Decoder and Playback tasks are the source and sink tasks, respectively.}
	\label{audio}
\end{figure}

Just as tasks can be replicated across several computing nodes, data items can be copied across several processing steps.
As shown in Figure \ref{audio}, the output of the Volume Unit (VU) Meter task is streamed to the $sink$ task in order to
prevent clipping effects. This implies that the VU Meter and the output task are able to run concurrently, thereby decreasing latency
and increasing throughput. This is an example of \emph{task parallelism}, which can be exploited by scheduling algorithms to
minimize system latency.
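The latency benefit of task parallelism can be sketched in one comparison (the execution times below are illustrative, not measured): two tasks that independently consume the same data item contribute the maximum of their execution times to the item's latency when run concurrently, rather than the sum.

```python
# Hypothetical per-item execution times (ms) for two independent
# consumers of the same data item.
et_vu, et_playback = 3.0, 5.0

sequential = et_vu + et_playback    # one node, one task after the other
parallel = max(et_vu, et_playback)  # separate nodes, run concurrently

assert parallel < sequential
print(sequential, parallel)  # → 8.0 5.0
```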

\begin{figure}[h!]
\centering
	\includegraphics[width=1\textwidth]{figures/Audiorep.eps}
	\caption{Task replication in an audio processing streaming workflow. Digital Filter task is replicated to increase throughput.}
	\label{audio_rep}
\end{figure}

Finally, several tasks can be \emph{clustered} together onto a single computing node, thereby eliminating the communication costs between them.
For example, the Scaler, VU Meter, and output tasks in Figure \ref{audio} require significantly less CPU time than the DFT and filter tasks, and can be
moved to a single computing node without creating a bottleneck, while decreasing latency.
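The effect of clustering on a path's latency can be modeled by zeroing the weight of every edge whose endpoints share a node. The sketch below uses a hypothetical three-task chain with invented weights:

```python
# Hypothetical chain: scaler -> vu -> output, with per-item execution
# costs et and communication costs wt.
et = {"scaler": 1, "vu": 1, "output": 1}
wt = {("scaler", "vu"): 4, ("vu", "output"): 4}

def chain_latency(cluster):
    # Edges internal to the cluster incur no communication cost.
    comm = sum(w for (u, v), w in wt.items()
               if not (u in cluster and v in cluster))
    return sum(et.values()) + comm

unclustered = chain_latency(set())                       # every edge paid
clustered = chain_latency({"scaler", "vu", "output"})    # co-located
print(unclustered, clustered)  # → 11 3
```

Clustering trades this latency reduction against the risk of overloading the shared node, which is why only the lightweight tasks are good candidates.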

\section{Streaming Workflow Scheduling}

The problem of scheduling a streaming workflow $G$ onto a set of $P$ processors involves producing a mapping $I$ of $G$ on $P$
that satisfies given throughput and latency constraints.
In their paper describing MOS~\cite{MOS}, Vydyanathan and his team showed that the maximum throughput, $T_{max}$, of $G$ is:
\begin{equation}
T_{max} = \frac{P}{\sum_{v_{i} \in V} {et(v_{i})}}
\end{equation}
Maximum throughput can be achieved by grouping all of the tasks in $G$ into a single \emph{task cluster} and running a copy of it on each of the $P$
processors. Similarly, a lower bound can be placed on the latency of a data item: as long as the data rate satisfies $T \leq T_{max}$,
minimum latency can be achieved by mapping the entire workflow onto each processor, resulting in a latency of:
\begin{equation}
L_{min} = \sum_{v_{i} \in V} {et(v_{i})} + \min_{v_{i} \in V}\left(wt(e_{source \rightarrow i}) + wt(e_{i \rightarrow sink})\right)
\end{equation}
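Both bounds can be evaluated directly from the task and edge weights. The following sketch implements the two formulas literally for a hypothetical workflow; the weights and the edges incident to the source and sink are invented for illustration.

```python
P = 4  # available processors

# Hypothetical per-item execution costs
et = {"source": 1, "filter": 6, "scaler": 2, "sink": 1}
# Hypothetical communication weights from the source to each interior
# task and from each interior task to the sink (for the L_min term)
wt_from_source = {"filter": 2, "scaler": 1}
wt_to_sink = {"filter": 3, "scaler": 2}

total_et = sum(et.values())

# T_max: P full-workflow copies, each finishing an item every total_et
T_max = P / total_et

# L_min: all execution on one processor plus the cheapest pair of
# source/sink communication edges through some task v_i
L_min = total_et + min(wt_from_source[v] + wt_to_sink[v]
                       for v in wt_from_source)
print(T_max, L_min)  # → 0.4 13
```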
In general, however, computing a schedule that minimizes one of the metrics (latency or throughput) while keeping the other constrained is NP-hard~\cite{proofs}.

\section{Related Work}

The bulk of the theoretical work on the complexity of scheduling streaming workflows was presented by Agrawal in \cite{proofs}, where he shows that
most of the minimization problems associated with streaming workflow scheduling are NP-complete. Several algorithms in the literature
attempt to minimize latency under a throughput constraint $T \leq T_{max}$ in polynomial time. The FCP algorithm presented by Spencer in
\cite{FCP} iterates over all the tasks in $G$ and finds the minimum number of replications of each task $v_i$ needed to meet the throughput constraint $T$.
Once all the required tasks are generated, FCP maps them to the available resources in the schedule that leads to the least aggregate latency~\cite{FCP}.
The EXPERT system developed by Guirado in \cite{EXPERT} analyzes all tasks in $G$ and forms task clusters that meet the throughput requirement.
Then, in order to minimize communication costs, these task clusters are mapped to individual computing nodes.
Finally, the MOS algorithm developed by Vydyanathan in \cite{MOS} evaluates the possibility of duplication, clustering, and replication for every task
and selects the method that leads to the smallest increase in latency while satisfying the throughput constraint.
In its second pass, MOS maps the resulting task clusters onto the $P$ processors by merging clusters in the way that leads to the largest decrease in latency.
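The replication-counting step attributed to FCP above can be paraphrased in a few lines. This is a sketch of the idea as described, not the published algorithm's actual code, and the task names and costs are invented: each task $v_i$ needs enough replicas $k$ that $k / et(v_i) \geq T$.

```python
import math

T = 0.5  # required throughput, items per unit time (illustrative)
# Hypothetical per-item execution costs for each task
et = {"decoder": 1.0, "filter": 6.0, "scaler": 2.0, "playback": 1.0}

# Minimum replica count per task so that k / et(v) >= T
replicas = {v: math.ceil(T * cost) for v, cost in et.items()}
print(replicas)  # only the slow filter needs more than one copy
```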

All three of these algorithms provide synthetic and real-world performance benchmarks; however, none considers the impact of the communication model
chosen for their system. In \cite{MOS}, Vydyanathan and his team showed MOS to produce schedules with lower latency and resource
utilization under the same throughput constraints as FCP and EXPERT. A plot of FCP and EXPERT latencies normalized against MOS for $k=2$
and $P=32$ is shown in Figure \ref{moswin}. MOS produces consistently lower-latency schedules than its counterparts, and is in fact the
only algorithm able to produce a schedule at $T=T_{max}$.

\begin{figure}[h!]
\centering
	\includegraphics[width=0.7\textwidth]{figures/MosWin.eps}
	\caption{MOS, FCP, and EXPERT compared in a synthetic benchmark with $k=2$, $P=32$ \cite{MOS}}
	\label{moswin}
\end{figure}

Since MOS was shown to produce the lowest-latency schedules, this work focuses on MOS as a white-box model to be tested against experimental results.
In their paper, Vydyanathan and his team claim that variations in communication costs will have a large impact on latency, while the buffering effect
of the system will dampen adverse effects on throughput. In order to characterize these effects, we propose a comparison of the MOS model's latency and
throughput predictions against experimental data collected with an in-house streaming workflow software system called P-land.
