\chapter{Our system}
\section{P-land}
To experimentally characterize the effect of the k-port model and of varying communication costs on streaming workflow performance,
we developed a general-purpose stream processing system that operates on a cluster according to a
user-supplied schedule. P-land (plugin-land) is able to load the required processing tasks at run time
as shared-library plugins. If an edge of a streaming workflow connects two tasks on the same node,
a synchronized queue is placed between the processing tasks; otherwise a k-port implementation routes the data between
computing nodes. P-land is written in C++ and relies on two third-party libraries: the Boost threading framework for thread management and synchronization,
and TinyXML for manipulating and storing XML files. In addition, CERN's ROOT package is used for data analysis
and plot generation in several utilities that accompany the P-land software package.

\section{Internal implementation}
P-land provides a mechanism to load a task at run time using the Linux loader API. This allows P-land to be reconfigured
for a different workload without restarting the application. Each task plugin is loaded and run in an individual thread,
using the \texttt{boost::thread} threading framework. The task chain for each node is provided via an XML file, generated by an implementation
of the MOS algorithm and distributed over the network by the control software.
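The loading step can be sketched using the Linux loader API directly. In the sketch below, the factory symbol name \texttt{create\_task} and its signature are hypothetical stand-ins for whatever entry point P-land's plugins actually export:

```cpp
#include <dlfcn.h>
#include <cstdio>

// Factory function each plugin is assumed to export; the name and
// signature are illustrative, not P-land's actual plugin interface.
typedef void* (*create_task_fn)();

void* load_plugin(const char* path) {
    void* handle = dlopen(path, RTLD_NOW);   // resolve all symbols now
    if (!handle) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return nullptr;
    }
    // Look up the factory symbol exported by the shared library.
    create_task_fn create =
        reinterpret_cast<create_task_fn>(dlsym(handle, "create_task"));
    if (!create) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return nullptr;
    }
    return create();   // instantiate the task object
}
```

Keeping the handle returned by \texttt{dlopen} around also allows a plugin to be unloaded with \texttt{dlclose} when the node is reconfigured.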

P-land is data independent: data items are differentiated by a tag field, with each tag corresponding to a specific data type. Because each task plugin
is supplied with an input and an output queue at run time, plugin authors are given a simple framework for developing stream processing software, outlined
in Algorithm \ref{plugin_algo}.

\begin{algorithm}[h!]
\caption{Plugin Framework}
\begin{algorithmic}
\label{plugin_algo}
\STATE $q_{in} \leftarrow input\_queue$
\STATE $q_{out} \leftarrow output\_queue$
\WHILE {$true$}
\STATE $data \leftarrow q_{in}.pop()$
\IF{$data.tag() = this.tag$}
\STATE $process(data)$
\STATE $q_{out}.push(data)$
\ELSE
\STATE $q_{out}.push(data)$
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
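In C++, the loop of Algorithm \ref{plugin_algo} might look like the sketch below. The \texttt{Data} type, its \texttt{tag} field, and the deque-backed queues are illustrative stand-ins for the synchronized queues P-land supplies; a real plugin loop blocks forever on \texttt{pop}, while this sketch drains a bounded queue so it terminates:

```cpp
#include <deque>

// Illustrative data item; P-land's real item type carries a tag,
// a destination field, and an opaque payload.
struct Data { int tag; int payload; };

void plugin_loop(std::deque<Data>& q_in, std::deque<Data>& q_out, int my_tag) {
    while (!q_in.empty()) {               // real loop: while (true) on a blocking pop
        Data data = q_in.front();
        q_in.pop_front();
        if (data.tag == my_tag)
            data.payload *= 2;            // stand-in for process(data)
        q_out.push_back(data);            // forward the item, processed or not
    }
}
```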

Furthermore, plugin developers do not have to worry about threading and synchronization issues, since all of the
internal data structures are thread-safe and thread management is handled internally by P-land.
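A minimal sketch of such a synchronized queue, written here with the C++ standard library rather than Boost so it is self-contained, could look as follows; the class name and interface are assumptions for illustration, not P-land's actual types:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class SyncQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(item));
        }
        not_empty_.notify_one();        // wake one waiting consumer
    }
    T pop() {                           // blocks until an item is available
        std::unique_lock<std::mutex> lock(mutex_);
        not_empty_.wait(lock, [this] { return !queue_.empty(); });
        T item = std::move(queue_.front());
        queue_.pop();
        return item;
    }
private:
    std::queue<T> queue_;
    std::mutex mutex_;
    std::condition_variable not_empty_;
};
```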

Communication between nodes is managed by a custom implementation of the k-port model. This implementation
maintains $k$ communication threads, all fed from the same synchronized queue (the k-port queue), and is able to throttle each communication thread
to $1/k$ of the link bandwidth. The throttling functionality can also be turned off, which allows P-land to be used for performance measurements
both with and without the k-port model enabled. Each data item carries a destination field that is set internally by the system
once the data item is placed onto the k-port queue. A user-provided XML file contains routing information that maps each destination
field to the IP address of a processing node.
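The throttling decision itself reduces to a small piece of arithmetic. The sketch below, with illustrative names rather than P-land's actual API, computes how long a communication thread should sleep after a send, assuming each thread is budgeted \texttt{limit\_bytes\_per\_sec} equal to the link bandwidth divided by $k$:

```cpp
#include <chrono>

// Microseconds a sender thread should sleep after transmitting sent_bytes
// in `elapsed` time, so that its average rate stays at or below
// limit_bytes_per_sec. Names are illustrative, not P-land's API.
long throttle_sleep_us(long sent_bytes, std::chrono::microseconds elapsed,
                       long limit_bytes_per_sec) {
    // How long the transmitted volume is allowed to take at the budgeted rate.
    long allowed_us = sent_bytes * 1000000L / limit_bytes_per_sec;
    long sleep_us = allowed_us - elapsed.count();
    return sleep_us > 0 ? sleep_us : 0;   // never sleep a negative amount
}
```

A thread that exhausts its budget early sleeps for the returned duration, producing the burst-then-idle behavior visible in the throttled measurements.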

\begin{figure}[h!]
	\centering
	\mbox{\subfigure[Unthrottled transfer]{\includegraphics[width=.5\textwidth]{figures/kportk4.eps}\label{kport:unth}}\quad 
	\subfigure[Transfer throttled to 1MB$/$thread]{\includegraphics[width=.5\textwidth]{figures/kport1Mbk4.eps}\label{kport:th}}}
	\caption{K-port implementation throughput for $k=4$. \underline{Blue}: real-time throughput $y_{r}(x)$. \underline{Red}: rolling sum $y_{sum}(x) = \displaystyle\sum\limits^{x}_{i=0}{y_{r}(i)}$}
	\label{kport}
\end{figure}

The throughput of the k-port implementation with $k=4$ is shown in Figure \ref{kport}. Subfigure \ref{kport:unth} shows the throughput of the unthrottled system in blue;
the blue curve represents a discrete integral of the data samples over one-second windows. The total link capacity of the test system is $100\,Mb/s$,
and as the plot shows, the four threads saturate the link. Subfigure \ref{kport:th} shows the throughput of the system throttled to $1\,MB/s$ per thread.
In this case the total system throughput under the k-port model is $4\,MB/s$, and indeed, as shown in Subfigure \ref{kport:th}, the aggregate throughput
never exceeds $4\,MB/s$. Here the link bandwidth is significantly higher than the
bandwidth allowed by the k-port constraint, $100\,Mb/s$ vs.\ $36\,Mb/s$. This causes the k-port threads to quickly use up their allotted slice of bandwidth and sleep
for the remainder of the time slice, which is evident in the plateaus of the throughput integral in Subfigure \ref{kport:th}.

Inter-process communication, out of band with the data propagation, is also supported in P-land. A map of string key-value pairs is synchronized across all
computing nodes, and plugin tasks are able to read and set values in this map through an internal API. This also provides a uniform
way of controlling processing plugin parameters and monitoring internal plugin state.
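The local, thread-safe view of this map could be sketched as follows; the class and method names are assumptions for illustration, and the cross-node synchronization that P-land performs over the network is only indicated by a comment:

```cpp
#include <map>
#include <mutex>
#include <string>

class SharedState {
public:
    void set(const std::string& key, const std::string& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        map_[key] = value;   // a real implementation would also broadcast
                             // the update to the other computing nodes
    }
    std::string get(const std::string& key) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto it = map_.find(key);
        return it == map_.end() ? std::string() : it->second;
    }
private:
    std::map<std::string, std::string> map_;
    std::mutex mutex_;
};
```

A plugin might, for example, publish its current processing rate under a well-known key, while the control software sets a key that the plugin reads as a parameter.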

\begin{figure}[h!]
	\centering
	\includegraphics[width=0.6\textwidth]{figures/QueueManager.eps}
	\caption{QueueManager monitoring a real-time audio processing application.}
	\label{qmanager}
\end{figure}

\begin{figure}[h!]
	\centering
	\includegraphics[width=1\textwidth]{figures/MetricProfile.eps}
	\caption{Performance penalty of throughput monitoring. The \underline{X} axis represents the number of threads connected to a source and a sink queue; the \underline{Y}
	axis represents the total throughput of the unit plugin (see Section \ref{system::performance}).}
	\label{qmanagerperf}
\end{figure}

All internal components, from the k-port implementation to the data links between task plugins, use the same synchronized queue implementation. This makes
these queues an attractive place to extract system performance metrics. A QueueManager API allows P-land to place any of its internal queues into a monitoring
structure accessible via the network. Figure \ref{qmanager} shows the QueueManager monitoring queue occupancy in a real-time audio processing application. In this application
a network OGG stream is decoded and passed for processing to a scaler task, an echo generation task, and finally to the playback task, which requires
three intermediary queues. The QueueManager API also allows the throughput of each queue, and the amount of time threads spend waiting for data, to be monitored at a $5\%-10\%$
performance penalty, as shown in Figure \ref{qmanagerperf}.

\section{System Performance}
\label{system::performance}
Some of the internal performance characteristics of P-land have been evaluated; the results are shown in Figure \ref{performance}. The test machine has a 6-core
AMD 1055T CPU equipped with 8\,GB of RAM. The processing \emph{unit plugin} is an implementation of bubble sort which randomizes its input.
\begin{figure}[h!]
\centering
	\includegraphics[width=1\textwidth]{figures/Metrics.eps}
	\caption{System benchmark. Clockwise, starting from top left. $a)$ Parallel scaling, performance per thread. $b)$ Parallel scaling, total performance. $c)$ Latency in serial scaling. $d)$ Throughput in serial scaling.}
	\label{performance}
\end{figure}

The top two plots show the parallel scaling of the unit plugin. In this case $n$ unit plugins are placed between the source and the sink task to benchmark the parallel throughput scaling
of the system. As displayed in plot \ref{performance}~$(b)$, the system shows linear throughput growth while the number of tasks is $n\leq6$.
At $n=6$ all CPU cores are in use, maximum performance is reached, and adding more task threads no longer increases throughput. Plot \ref{performance}~$(a)$
depicts the throughput per thread for each $n$. One would expect the throughput per thread to remain constant while $n\leq6$; however,
due to the serial nature of the inter-plugin queue, some of the computation time is spent waiting on the appropriate semaphore as the number of threads increases.

Plot \ref{performance}~$(c)$ shows the serial latency scaling. In this benchmark the unit plugins are chained in series between the source and sink tasks, and
the plot shows the propagation time of a single data item through the chain as a function of the number of processing plugins $n$ in that chain.
As one would expect, this benchmark shows the latency increasing linearly.

The final plot, \ref{performance}~$(d)$, shows the scaling of the serial throughput. The unit plugin arrangement remains the same as in the serial latency scaling test;
however, instead of measuring the propagation time of a single data item, the source task produces data items as quickly as the processing chain is able to consume them.
The total throughput is then plotted as a function of the plugin task count $n$. Ideally one would expect this plot to show constant throughput for $n\leq7$. However, due to the
thread-safe nature of the data queues, even with the number of threads in the chain at $n<6$, some of the execution time is wasted waiting on the appropriate locks.
This is made evident in Figure \ref{performance4}, which shows the serial throughput along with the average time each thread spends waiting for data.
For example, at $n=6$ a thread spends on average $400\,ms$ of every second waiting, which at a rate of $160\;packets/s$ results in a net performance loss of $64\;packets/s$.
Given that the peak performance, i.e.\ the performance of a single unit plugin chain, is $220\;packets/s$, we can attribute all of the performance loss to pipeline stalls.

When the chain in Figure \ref{performance}~$(d)$ grows beyond $n>6$ unit plugins, a sharp drop in throughput is observed, since the pipelined parallelism of the system can no longer be maintained
and $n-6$ tasks must be executed sequentially.

\begin{figure}[h!]
\centering
	\includegraphics[width=1\textwidth]{figures/Metrics4Detail.eps}
	\caption{\underline{Top}: Throughput in serial scaling. \underline{Bottom}: Average time each thread spends waiting for data.}
	\label{performance4}
\end{figure}

The internal performance of P-land shows the typical characteristics of a shared-memory producer-consumer system. Additionally, the k-port implementation is shown
to function in accordance with the model it implements. More performance data needs to be collected and verified before P-land can be fully characterized.
This mainly entails integration of the k-port and main P-land code bases. Once the code bases are merged, additional metrics pertaining to P-land scaling on
heterogeneous memory systems will be collected. After the performance characterization of P-land is complete, experiments with streaming workflow
scheduling will be conducted.
