\chapter{\vbs}
\label{chapter:vlbistreamer}
\section{Overview}
\vbs is software designed by the author in collaboration with the designers of
Flexbuff. Development was done mostly between 2011 and 2012 by the author. \vbs
can record and send high-speed network packet streams.
The main focus of development was the recording and storing of radio astronomy
sessions and the subsequent sharing of those sessions.
\section{Design principles}
\subsection{The Linux Kernel and hardware}
Linux is a powerful and widely developed operating system kernel. In
addition to all the benefits it provides, it also restricts certain data paths.
The need for high-speed transfer of data from network to disk calls for
minimizing memory copies en route to disk, but the kernel enforces
kernel/user space separation and thus causes extra memory copies.
\subsubsection{Packet receiving}
As described in Chapter \ref{chapter:software}, the
kernel copies the packet from the network card's memory to a socket buffer,
which is then copied to a user space buffer with, for example, the recv system call.
The corner-cutting version would copy packets straight from the network
card memory to disk with a direct memory access (DMA) operation. This could be
done by writing or augmenting a network card driver. The openness of the Linux
kernel permits this sort of augmentation of the drivers and the kernel.

With limited development time, \vbs would have been restricted to a single or a
few network card models, so the software would only run on those few cards.
This would also mean higher software development costs when porting
the software to the next generation of hardware. Also, all ongoing
development of the existing drivers and receive paths would either be lost or
would need manual integration with our code base. There would most likely be
issues with using the card for normal network operations with the custom driver,
not to mention the security risks.

Weighing all these toils together, the few extra memory copies start to look
more appealing, especially if the software is intended to last over hardware
generations.
\subsubsection{Hard drives}
\vbs is not tied to any specific hard drive or other non-volatile medium. The
current implementation is geared toward utilizing large sequential writes and
avoiding simultaneous operations on a single target. The non-volatile media
targets are set up as reservable resources, which the buffer elements use to
read or write their data. They can also be used as queues: for example, when a
certain file is required by a buffer entity, the entity queues itself to the
drive and is woken up when the resource is free to use again.
\section{Architecture}
\label{sec:archi}
\begin{figure}
  \begin{center}
  \tikzstyle{free}=[draw=green!50,fill=green!20]
  \tikzstyle{busy}=[draw=red!50,fill=red!20]
  \tikzstyle{loaded}=[draw=blue!50,fill=blue!20]
  \tikzstyle{active}=[draw=black!50,fill=black!5]
  \tikzstyle{neutral}=[draw=black!50,fill=black!5]
  \tikzstyle{element}=[rectangle, rounded corners]
  \tikzstyle{buffer}=[rectangle,draw,fill, rounded corners, scale=0.7]
  \tikzstyle{main}=[circle,draw,fill, scale=0.5]
  \tikzstyle{ttext}=[font=\tiny]
  \begin{tikzpicture}
      \node[element,neutral]	(scheduler)	{Scheduler};
      \node[element,neutral]	(membuf) [right of=scheduler,xshift=3cm,yshift=3cm]	{Memory buffers};
      \node[buffer,free]	(afre0)	[below left of=membuf,xshift=-0.8cm,yshift=-0.5cm] {Buffer}
      edge [-] (membuf);
      \node[buffer, free] (afre1)	[below of=afre0,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre2)	[below of=afre1,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre3)	[below of=afre2,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre4)	[below of=afre3,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre5)	[below of=afre4,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre6)	[below of=afre5,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre7)	[below of=afre6,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afre8)	[below of=afre7,yshift=0.7cm] {Buffer};
      \node[buffer, free] (afren)	[below of=afre8,yshift=0.7cm] {Buffer};
      \node[buffer,loaded]	(alod0)	[below of=membuf,yshift=-0.5cm] {Buffer}
      edge [-] (membuf);
      \node[buffer, loaded] (alod1)	[below of=alod0,yshift=0.7cm] {Buffer};
      \node[buffer, loaded] (alod2)	[below of=alod1,yshift=0.7cm] {Buffer};
      \node[buffer, loaded] (alod3)	[below of=alod2,yshift=0.7cm] {Buffer};
      %edge [-] (alod8);
      \node[buffer,busy]	(abus0)	[below right of=membuf,xshift=0.8cm,yshift=-0.5cm] {Buffer}
      edge [-] (membuf);
      \node[buffer, busy] (abus1)	[below of=abus0,yshift=-0.1cm] {Buffer}
      edge [-] (abus0);
      \node[buffer, busy] (abus2)	[below of=abus1,yshift=-0.1cm] {Buffer}
      edge [-] (abus1);
      \node[buffer, busy] (abus3)	[below of=abus2,yshift=-0.1cm] {Buffer}
      edge [-] (abus2);
      \node[buffer, busy] (abus4)	[below left of=abus3,yshift=-0.1cm] {Buffer}
      edge [-] (abus3);
      \node[buffer, busy] (abus5)	[below right of=abus4,yshift=-0.1cm] {Buffer}
      edge [-] (abus4);
      \node[element,neutral]	(recpoints) [right of=membuf,xshift=3cm]	{Recpoints};
      \node[buffer,free]	(rfre0)	[below right of=recpoints,xshift=0.6cm,yshift=-0.5cm] {HD}
      edge [-] (recpoints);
      \node[buffer, free] (rfre1)	[below of=rfre0,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre2)	[below of=rfre1,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre3)	[below of=rfre2,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre4)	[below of=rfre3,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre5)	[below of=rfre4,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre6)	[below of=rfre5,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre7)	[below of=rfre6,yshift=0.7cm] {HD};
      \node[buffer, free] (rfre8)	[below of=rfre7,yshift=0.7cm] {HD};
      \node[buffer, free] (rfren)	[below of=rfre8,yshift=0.7cm] {HD};
      \node[buffer,busy]	(rbus0)	[below left of=recpoints,xshift=-0.4cm,yshift=-0.5cm] {HD}
      edge [-] (recpoints)
      edge [<-, bend right] node[ttext,yshift=0.1cm] {Writing} (abus0);
      %edge [post] {Writing} (abus0);
      \node[buffer, busy] (rbus1)	[below of=rbus0,yshift=-0.1cm] {HD}
      edge [-] (rbus0)
      edge [<-, bend right] node[ttext,yshift=0.1cm] {Writing} (abus1);
      \node[buffer, busy] (rbus2)	[below of=rbus1,yshift=-0.1cm] {HD}
      edge [-] (rbus1)
      edge [->, bend right] node[ttext,yshift=0.1cm] {Reading} (abus2);
      \node[buffer, busy] (rbus3)	[below of=rbus2,yshift=-0.1cm] {HD}
      edge [-] (rbus2)
      edge [<-, bend right] node[ttext,yshift=0.1cm] {Writing} (abus3);
      \node[main,active] (receiver) [below right of=scheduler, yshift=-2cm,xshift=4cm] {Data Receiver}
      edge [->, bend right] node[ttext,yshift=-0.3cm, xshift=+0.1cm] {Receiving packets} (abus4)
      edge [<-, bend left, dotted] node[ttext,yshift=0.2cm,xshift=-0.4cm] {Grab new} (afren)
      edge [-, dotted] node[ttext,yshift=-0.3cm,xshift=-0.1cm] {Timed start} (scheduler);
      \node [ttext] (socket1) [below of=receiver, yshift=-1cm] {Socket};
      \draw[snake=triangles] (socket1) -- (receiver);
      %edge [->] (receiver);
      \node[main,active] (sender) [below right of=scheduler, yshift=-2cm,xshift=14cm] {Data Sender}
      edge [<-, bend left] node[ttext,yshift=-0.3cm, xshift=+0.1cm] {Sending packets} (abus5)
      edge [->, bend left,dotted] node[ttext,yshift=0.0cm,xshift=-0.60cm, near end] {Grab next} (alod3);
      \node [ttext] (socket2) [below of=sender, yshift=-1cm] {Socket};
      \draw[snake=triangles] (sender) -- (socket2);
      %edge [<-] (sender);
    %\foreach \x in 

    %\node[entity,free] (afre) [below of=membuf] {Buffer};
  \end{tikzpicture}
  \begin{tikzpicture}
  \node[draw=green!50,fill=green!20,scale=1.0,xshift=0.3cm,yshift=-0.3cm] (hur1)
  at (current page.north west) [label=right:$free$] {};
  \node[draw=red!50,fill=red!20,scale=1.0] (hur2) [below of=hur1,yshift=0.5cm] [label=right:$busy$] {};
  \node[draw=blue!50,fill=blue!20,scale=1.0] [below of=hur2,yshift=0.5cm] [label=right:$loaded$] {};
  %\node[draw=yellow!50,fill=yellow!20]
  \end{tikzpicture}
\end{center}
  \caption{VLBI-streamer architecture. The scheduler starts receivers and senders. Receivers reserve free buffers from the queue, while buffers filled by the receiver write themselves to the available hard drives. Senders order a number of buffers to fill themselves from disk and send them sequentially to the network.} \label{fig:M1}
\end{figure}
\subsection{Active file index}
\label{subs:afi}
As the software might be receiving a stream that it simultaneously wants to send
onward for correlation, a central data structure and access interface for file
metadata is required. The active file index handles loading and saving of file
metadata in a thread-safe manner. This way active recordings can be mirrored to
multiple sites and the transmission medium changed: for example, a UDP stream
from a FiLA10G can be received while simultaneously sending it to a remote
location with TCP packets.
\section{Modularity}
Structuring software modularly can substantially improve software
quality \ci{modularity}. There are also many characteristics of this project
that suggest a modular approach. As explained in \ref{ref:tcp}, TCP does not
utilize a network's bandwidth optimally in some scenarios. This is why the
software sports modular send and receive sides supporting UDP, TCP and
multi-streamed TCP, described in section \ref{tcpmultistream}. The transfer between
volatile and non-volatile memory is also modular, with the different capabilities
described in section \ref{writebacks}.
\subsection{Back ends for network}
The network side's access to the memory buffers consists of asking for buffers
and their memory space and then releasing them so their threads can write themselves
to disk. Other than that, the networking module can operate quite freely.
Using the modules is split into three phases: initialize, run and close.

As existing packet packers like FiLA10G use stateless UDP packets for data
transfer, the first back end developed was a UDP packet receiver.
\subsection{UDP packet receiver}
\label{subs:udp_receiver}
The UDP packet receiver creates a simple SOCK\_DGRAM socket and mostly only
tunes its buffer size in the kernel memory space to the maximum size allowed. The
operational parameters are a port and the time spent recording, or a target for
mirroring, which means re-sending each packet to the network immediately towards
a third target. Although a very simple solution, in subsequent tests it
performed very well and did not strain a multi-core system even when receiving
packets at close to $10\,Gb/s$ line rate.

The challenging part was developing a sending side that could regulate the speed
at which it was sending. We cannot send UDP packets in a busy loop, since they
cause congestion at intermediate network nodes, which with great certainty
will not have the same speed of network connectivity due to the large distances
between stations. If, for example, the network card of the sending machine
can output 10Gb/s, but an intermediate link to the correlator has only 2Gb/s capacity,
four fifths of the data will be dropped at the choke point. There must be a wait time
between individual packet sends at the application level.

The wait time can be implemented with a busy loop or a sleep call.
In a non-pre-emptive kernel with normal priority threads,
the minimum amount slept was tested to be the scheduler's tick interval. This
translated into a minimum sleep of about $50\,\mu s$, which capped the rate too
low for sensible transfers on a 10GE network. For example, with 8888 byte
packets: $\text{Rate} = \frac{\text{Size of packet in bits}}{\text{Time between
packets}} = \frac{8888 \times 8}{5\times10^{-5}}\, b/s \approx 1.4\,Gb/s$.

Due to the minimum sleep time, the implementation has an optional busy-loop
waiter for systems without a pre-emptive kernel. This waiter has dire
performance consequences on systems with multiple sending processes, as each
sending process requires a core spinning in a busy loop. This means severe
scalability problems with multiple sending threads.

With a proper pre-emptive kernel, a low-latency scheduler timer and proper
real-time priority, the sleeping timer works as intended and provides an optimal
rate limiter for VLBI-streamer.

\subsection{RX-ring}
As described in \ref{sec:mmap}, there is a ready-made load-balancing packet
capturer capable of writing into mapped memory areas. At first this sounded like
the perfect solution for this project, aside from the forced capturing of all
packets. This was implemented and tested, but it showed a very high interrupt
rate, which resulted in packet loss at data rates larger than $\approx5\,Gb/s$.

The module source code is still in the \vbs source tree, but it is discontinued
from development and will most likely not work. There is a chance that this
interface might prove useful for some scenario, or once it is developed to also
use interrupt mitigation techniques.

\subsection{TCP packet receiver}
As described in \ref{ref:tcp}, TCP would have many advantages for the network
transfer towards correlation after the data stream has been recorded from a
back end.

Since all connections from observatories to the correlation sites are unique,
it is not guaranteed that the data rates are uniform between the stations. The
correlation itself requires the same time slices from each station, and a
delayed send from one station would inadvertently delay the whole correlation
and require the other streams to be buffered while the delayed stream tries to
catch up. If a TCP connection was used instead of UDP packets, the streams would
be automatically slowed through the TCP stack to the rate of the slowest sender
by blocking the sending thread.

There is no data loss and no need for packet resequencing. Also, in the event of
a network connection failure, the sending sides would be automatically notified,
disconnected, and would stop their transfers. If there was sudden unexpected
extra traffic in switches en route, TCP could automatically react to it
without extra coding effort. The rates would be slowed, and all other
connections would also adjust but not fail altogether.

\section{Back ends for writing}
\label{writebacks}
The API for writing modules is quite simple. Upon acquiring a write element as a
resource, the write end will open a new file to which to write the data, or open
an existing file from which to read data. The buffer will call the back end's
write or read function to transfer the data.

Each of these writers has different characteristics. For example, as shown in
\ref{ddstuff}, a writer using DIRECT\_IO requires considerably fewer CPU cycles,
while being restricted to writes of the underlying block size. While the use of
fewer CPU cycles enables the recording of a higher bandwidth data stream,
the restriction can, for example, prohibit the manipulation of the received
data headers to make them compatible with a specific correlator, as explained in
\ref{hdrstrip}.
\subsection{Default writer}
The default writer uses the standard read() and write() system calls on files
opened with the DIRECT\_IO flag.
\subsection{Asynchronous I/O writer}
The asynchronous I/O writer (AIO) uses the Linux-native libaio library to queue
all the writes and poll for their completion. The motivation is the ability to
queue large writes, which can fill the target drive's caches efficiently for
data flows.
\subsection{Splice writer}
The splice writer tries to benefit from moving data between file descriptors
without needing to copy data between kernel address space and user address space.
During the winter of 2012, splice had more limited support for connectionless
UDP sockets, which downplayed its importance for VLBI-streamer. The splice writer
in VLBI-streamer simply splices the file descriptor associated with the memory
mapped buffer into the persistent storage file descriptor and hints the file
system to write the memory area to disk.

Although splicing could have been used to transfer bytes between a network socket
and a persistent storage file descriptor, this would have tightly coupled the
disk write end to the network receive end without the ability to buffer the data.
This could have caused packet loss if the physical hard drive stalled, although
the kernel's virtual file system, explained in section \ref{sec:vfs}, could
probably have handled the jolts.
\subsection{Writev writer}
The writev writer uses system calls to structure a write according to an array
of iovec structures. Iovec structures allow for gathered output writes, where a
large batch of smaller memory areas is written in a single call. This is useful
when stripping the headers from the start of packets is required. It is
problematic, though, as the page cache is used, extra memory copies are made,
and the writes will usually not be page-sized, which might cause data alignment
overhead. \ci{writev}
\section{Packet manipulation}
\subsection{Packet resequencing}
UDP streams are connectionless and lossy, and do not guarantee that the data
arrives in the order it was sent. Therefore a mechanism had to be implemented
into \vbs to take care of resequencing packets. It is also important to fill in
missing packets with dummy data, as the packets from multiple stations have to
be aligned with each other so the correlation does not have to realign them.
Looking into the metadata is also useful for specifying a start time for data
recording from the data stream.

The resequencing algorithm copies the packet from the socket to its presumed
slot without looking at the metadata. If the packet turns out to be an earlier
packet, it is then copied to its correct spot. If a previously used buffer is
missing packets, it is left to dangle on the receive end so the missing packets
can still be written to it. If the packet arrived before it should have, the
index is moved to that packet's position. This way there is no need to keep
count of which packets were received and which were not, since already received
packets ahead of our index will not be overwritten. Previous packets might
arrive twice, but this is not an issue.

Filling the slots of missing packets with dummy data, or marking them as
non-valid, is done in the memory buffers, away from the critical receive process.
\subsection{Header stripping}
\label{hdrstrip}
As explained in \ref{sub:fila10g}, \vbs occasionally has to manipulate the data.
Since writes with DIRECT\_IO require page size aligned data, this mode is only
supported with the writev writer.
\section{Daemon mode}
Starting a process that requires most of the system memory can be time
consuming. Also, if multiple such processes are desired to run on a single
system, they should be capable of sharing their resources, since they cannot all
have most of the system memory in their use. This is why \vbs was converted by
the author to run as a daemon process, sharing its memory buffers and disks
between multiple receives and sends.
\subsection{Scheduling}
Instead of simply timing shell commands to start transfers, a scheduling system
was created as the base for spawning other threads. A scheduling thread takes
care of initializing all the resources and monitors a schedule file, from which
it schedules the data transfer threads for receiving or sending.
\subsection{Priority}
As explained in section \ref{introcots}, we wanted \vbs to run strictly in user
space. It nevertheless requires priority settings higher than those allowed for
normal user space processes. The natural way of combining these requirements was
to run \vbs as a regular OS service, started at boot time. After the software
starts, it sets its priority high enough and then drops its privileges.
\section{FUSE}
Filesystem in Userspace (FUSE) enables mounting file systems in user space. This
relaxes the requirements for developing a custom file system and has enabled
many novel file systems, such as sshfs for mounting a file system
over ssh. \ci{fuse}

In this project, FUSE was used to give the correlator single continuous files
assembled from the split files it had received. This was developed, but with the
limited development time it proved somewhat unstable. FUSE could also have been
used to change the data format on the fly without modifying the original data.
For example, Mark5B network headers could have been changed to the regular
non-network headers. This could also have been interfaced with VLBI-streamer to
utilize shared memory and accelerate the read process itself without the need
for a RAID.
\subsection{Read acceleration}
The correlating programs have a slow start, which is caused by pre-checks that
try to determine the common starting point and clock skew between the
recordings. If the correlator were given a number of files instead of one, it
would have to run the pre-checks on each file separately and the correlation run
would be much longer. Although \vbs supports writing to a single file, there
is still reason to organize the correlation read through VLBI-streamer. If the
correlator were to start by reading small blocks on disks which are being
written to, disk seeks would increase and total throughput would drop
dramatically.

By using shared memory and communicating through local domain sockets, \vbs can
serve the needs of a reading process by accelerating and policing its reads.
Since the correlator will read the whole recording sequentially, it makes sense
to treat it similarly to a send process; even most of the modules can be re-used
for this purpose. Currently, the \vbs FUSE file system does its own reading
through the mountpoints.

\subsection{Data manipulation}
\label{datamani}
Since there are pre-definable functions between the reading process and the \vbs
buffers, the data read can be manipulated into different formats. Take the header
stripping process described in \ref{sub:fila10g}: header stripping can be done,
and is already implemented, on the FUSE level, with each recording being stripped
of its headers without taking a performance toll on the receive process.

When developing \vbs, it soon became apparent that using DIRECT\_IO and byte
stripping would not work well together. DIRECT\_IO requires block size writes,
while byte stripping breaks up the data writes from memory to disk into segments
of the packet size minus the number of stripped bytes. There would be several
ways to work around this problem:
\begin{itemize}
  \item Copy the data from the socket first to a temporary buffer, from which
    only the requested amount is transferred to the actual buffer to be written
    to disk.
  \item Copy the packet segments to another memory buffer, which will be written
    to disk without the overheads.
  \item Receive each strippable part into a fake buffer and the data into the
    real buffer.
\end{itemize}
During development, no correct way was decided upon, and so no implementation of
DIRECT\_IO writers with byte stripping was developed.

