\chapter{Introduction}
\label{chap:intro}

Data analyses that operate on streams of input data are ubiquitous in modern computing.
From live video and audio processing to feature extraction in experimental high-energy physics
and medical imaging, these applications exploit the parallelism of the underlying hardware to meet their system requirements.
Traditionally, these problems have been solved by employing specialized hardware such as
Field Programmable Gate Arrays and Digital Signal Processors, owing to their low unit cost
and highly parallel instruction and memory architectures. However, with the advent of
multicore CPUs and GPGPU APIs such as OpenCL and NVIDIA CUDA, this class of problems has found
its way onto general-purpose computers and clusters.

A streaming workflow consists of a graph of independent tasks, each of which receives data from its predecessor, processes
it, and passes it on to the next task. Efficient execution of these workflows requires finding a schedule that optimizes performance with minimal resource consumption.
Two metrics are generally considered when solving this scheduling problem: throughput and latency.
The throughput of a streaming workflow is the total processing capacity of the system, while its latency is
the amount of time the system takes to process a single data item.
In real-world applications it is usually beneficial to optimize one metric while fixing the value of the other.
For example, in experimental particle physics, feature-extraction and event-filter software must meet a strict
throughput constraint in order to keep up with the detector's output while maintaining reasonable latency. Audio and video
processing software requires low-latency operation; at the same time, its throughput can be optimized to
process the maximum number of input channels. Both of these workflows present a multi-objective scheduling problem: a fixed performance
target on one metric, and an optimization of the other.
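The distinction between the two metrics can be made concrete with a minimal sketch of a fully pipelined linear workflow (the three stage times below are our own illustrative values, not measurements): in steady state, one item completes every $\max$ of the stage times, while a single item traverses the whole pipeline in their sum.

```python
# Illustrative sketch: a linear three-stage streaming pipeline with
# hypothetical per-item processing times in milliseconds.
stage_times_ms = [2.0, 5.0, 3.0]

# Latency: time for one data item to flow through every stage in sequence.
latency_ms = sum(stage_times_ms)

# Throughput: in steady-state pipelined execution, the slowest stage
# bounds the completion rate, so one item finishes per max(stage_times).
throughput_items_per_ms = 1.0 / max(stage_times_ms)

print(latency_ms)               # 10.0
print(throughput_items_per_ms)  # 0.2
```

This also illustrates the trade-off described above: replicating the 5 ms stage would raise throughput but leave the latency of each individual item unchanged.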

Several algorithms have been proposed to solve the streaming workflow scheduling problem. Exploiting Pipeline Execution undeR Time constraints (EXPERT)~\cite{EXPERT},
Filter-Copy Pipeline (FCP)~\cite{FCP}, and the Multi-objective Scheduling heuristic (MOS)~\cite{MOS} present novel approaches to generating low-latency,
high-throughput streaming workflow schedules. Furthermore, these algorithms take into account the communication
costs of transferring data between computing nodes, which allows them to create schedules for distributed memory systems. All three of these algorithms
make assumptions about the computational hardware they produce schedules for. Most notably, the k-port model is common to all three~\cite{EXPERT,MOS,FCP}.
The k-port model states that each node in a distributed memory system can communicate with at most $k$ other nodes, and that each communication link
can utilize up to $1/k$ of the node's total bandwidth. This simplistic model does not account for network congestion and load balancing, which may add jitter
to communication costs and negatively impact system latency and throughput.
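Under the k-port model as stated above, the transfer time for a message is simply its size divided by the per-link bandwidth cap $B/k$. A small sketch (with hypothetical bandwidth and message-size values of our own choosing) makes the throttling effect explicit:

```python
# Sketch of the k-port model's link throttling, using made-up numbers.
def kport_transfer_time(data_bytes: float, total_bandwidth: float, k: int) -> float:
    """Time to send data_bytes over one link when each of a node's k
    concurrent links is capped at total_bandwidth / k."""
    link_bandwidth = total_bandwidth / k
    return data_bytes / link_bandwidth

# Example: 1 MB sent from a node with 100 MB/s total bandwidth.
# With k = 1 the full bandwidth is available; with k = 4 each link
# gets only 25 MB/s, so the same transfer takes four times as long.
print(kport_transfer_time(1e6, 100e6, 1))  # 0.01 (s)
print(kport_transfer_time(1e6, 100e6, 4))  # 0.04 (s)
```

Note that this deterministic cost is exactly what the model fails to capture in practice: congestion and load imbalance make real transfer times vary around such estimates.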

In this work we characterize the effects of real-world and model-imposed communication constraints on streaming workflow performance. For this purpose we propose to build a data- and processing-independent
software system capable of executing generic streaming workflows. This system will execute streaming workflow schedules generated by the MOS~\cite{MOS} scheduling algorithm under several communication models.
Data collected from these experiments will be used to characterize the performance discrepancies between the communication models and the real world. Three distinct communication constraints are considered:
\begin{enumerate}
	\item \emph{k-port model throttling}: An implementation of the classic k-port model will be included in the system,
	and the effects of the throttling constraints imposed by this model will be evaluated.
	\item \emph{Network jitter injection}: Extra network traffic will be injected into our streaming workflow system in order to increase communication jitter
	without exceeding network capacity. The effects of network jitter on system performance will be evaluated.
	\item \emph{Network I/O buffer size}: The effects of network communication buffers, both internal to our software and inherent to the communication channels, will
	be examined.
\end{enumerate}
Based on the results obtained in these experiments, we will make recommendations regarding the communication model that workflow scheduling
algorithms should use to produce high-quality schedules. We will put these recommendations into practice in the context of a streaming-workflow-based soft real-time analysis
suite for a neutrino detector experiment, which is further described in Section~\ref{chap:purposed_work:MTC}.
