\chapter{Software concepts}
\label{chapter:software}
The path of data from the wire to persistent storage is laden with different
software boundaries and concepts. These can be roughly divided into:
\begin{enumerate}
  \item Network protocols transferring the data.
  \item Sockets as gateways between the network and volatile memory.
  \item Transferring data between volatile and non-volatile memory.
  \item File systems as non-volatile memory.
\end{enumerate}
In this chapter we examine alternatives and design considerations for each
step. In addition to performance, a qualifying factor for an alternative was
also its probable life span, meaning how probable it is that the same
alternative remains usable in the future without added development effort.

\section{Network scenario and correlation}
The advantage of VLBI is that it emulates a dish the size of the baseline
between the antennas. The dishes can be separated by thousands of kilometers.
This inherently sets the network scenario to
include large distances and so also increases the round trip time (RTT) of data
transfers. Resolution is also affected by the measurement bandwidth, so more
data bandwidth results in higher resolution data. This forces our network
scenario to be a long fat network (LFN).

In a correlation, each station's data needs to be available at a single
location, where data from the same time spans are compared to correlate a
result through processes not described in this thesis. The relevant part is
that the data needs to be made available at a high bandwidth with matching
time spans.

Numerous works exist on the network protocols described here; for more
in-depth information I recommend \cite{tcpip}, which describes all the
protocols in this section.

\subsection{Transmission Control Protocol}
\label{ref:tcp}
Transmission Control Protocol (TCP) is the default transport layer protocol
between nodes on the Internet. It hides packet loss and reordering of packets
from applications and avoids congestion by keeping track of acknowledgements.
TCP also has states as it is connection oriented.

A simple description of TCP is that the sending side has a window of packets
that are en route to the destination. The size of the window determines how many
packets can be en route at a time. A packet is cleared from the window when an
acknowledgement is received for it. The classic TCP algorithm is Reno, which
starts with a small transfer rate that climbs up to match the bandwidth to
the destination. If a packet is lost, the rate is dropped dramatically and the
slow start begins again. \ci{tcpreno}

Although TCP is a very well functioning protocol for open Internet usage, the
problem in our domain is that it has a history of not performing optimally on
LFNs \ci{jacobson1988tcp}.

With a large RTT, achieving the network's full capacity takes longer, since the
sending side has longer iteration times when it tries to increase its window
size. Combined with the possibility of a few packets dropping, this results in
the zig-zag shape demonstrated later in \ref{lrtcp}.

Many of the older problems were addressed in \ci{jacobson1992tcp}, which
spawned new TCP algorithms to replace the Reno algorithm. The current default on
Linux systems is Cubic, described in \ci{ha2008cubic}. Tests such as
\ci{tierney2012efficient} with modern TCP algorithms show promise in total
bandwidth even on a 40GE network, though that study had a relatively short path
with an RTT of 49ms. Similar tests with a very long baseline of 32372km show
that TCP will eventually use the bandwidth optimally, but only after a fairly
long ramp-up time. The problem with this study is its use of Linux kernel
2.6.12, which is fairly old and most likely still using the old Reno algorithm.
\ci{yoshino2007analysis}

An important advantage of using TCP in a correlation process is its inherent
and automatic buffering and rate control. The protocol stops sending when the
reader's buffer is full and vice versa. In the correlation phase the correlator
needs each station's data at the same rate; without this property, buffering
mechanisms would be forced onto the receiving side, or conversely rate
throttling onto the sending side. For a scenario where multiple stations with
different network paths and capabilities are sending to a correlator, the TCP
streaming rate from each station would converge to a shared minimum without
added software complexity.

\subsubsection{TCP with multiple streams}
\label{tcpmultistream}
In addition to the already listed problems and improvements for TCP, there is
also the possibility of using multiple TCP streams instead of one. This
mitigates the problems of long fat networks explained in \ref{ref:tcp} and
in essence divides the LFN into multiple virtual thinner networks. The loss in
speed when an individual stream's rate is cut in half is also mitigated
considerably.

The question then becomes how to speed up individual transfers with this
method. Since we are mostly dealing with data that is divided into packets,
the data stream can be divided by packet and distributed into an arbitrary
number of TCP pipes. The only requirement is that both ends know how many TCP
connections there are, so they know which packet belongs in which spot. This
method could be named single data, multiple streams. The study \cite{cascaded_tcp}
referred to it as cascaded TCP.

\subsection{User Datagram Protocol}
User Datagram Protocol (UDP) is a transport layer protocol whose header has
only fields for the source port, destination port, length and a checksum. This
means it does not resequence packets or resend lost packets. Another way of
putting it: it lets a higher level application take care of data loss and
reordering. UDP also does not take congestion into account and so can cause
congestion collapse on nodes with heavy load. \ci{braden1998recommendations}

UDP is the main workhorse in observations and important in the VLBI-streamer
software itself. This poses a challenge with rate limiting. As there is no
inherent rate limiting in the UDP protocol, the sending side has to take the
network capacity of the full path into consideration. Since the recording
machines themselves are locally connected to high speed network switches,
having them take only the first network hop into consideration would most
likely cause most of the traffic to be dropped as packet loss at some hop along
the network path. So anyone sending UDP traffic has to take the path's weakest
link into consideration.

Sending UDP packets at a constant rate is not a trivial task. It was quickly
noticed during the development of VLBI-streamer that simply sleeping until the
next packet should be sent is not as straightforward as it seems, as the default
non-preemptive Linux scheduler only wakes threads at scheduling intervals
\cite{op_systems}. These problems are solved in section \ref{subs:udp_receiver}.

\subsection{Stream Control Transmission Protocol}
Stream Control Transmission Protocol (SCTP) is a sort of hybrid of TCP and
UDP. Since TCP abstracts the sending of data to the point where the only
consideration is the number of sent bytes, there are no clear packet boundaries
in TCP. SCTP adds this packet boundary awareness. Packets are also kept in
order, which makes SCTP connection oriented. SCTP is only introduced here; it
is not used in VLBI-streamer or in the observations themselves.

\section{From the network to main memory}
A simplification of a packet receive is as follows:
\begin{enumerate}
  \item A packet is read from the wire into one of the network card's internal
    memory queues.
  \item The network card generates an interrupt to signal the operating system
    of a new packet.
  \item The interrupt handler eventually copies the packet into the correct
    kernel space socket buffer, which was reserved for the receiving program.
  \item The receiving program copies the packet from the kernel space buffer to
    its own user space buffer.
\end{enumerate}
\subsection{Sockets}
Sockets are endpoints of an inter-process communication flow. There are three
socket types, each of which was used during this thesis:
\begin{itemize}
  \item Datagram sockets or connectionless sockets
  \item Stream sockets or connection oriented sockets
  \item Raw sockets
\end{itemize}
From a software developer's perspective, sockets are reserved kernel memory
spaces into which the operating system (OS) copies packets. The packets are
bound to specific sockets according to their port numbers and can be read into
user space with system calls. If the buffer is full and more packets arrive,
the extra packets are dropped and the kernel registers this as packet loss.

For more in-depth information on the operating systems handling of network
related functionality, I recommend \cite{op_systems}.

\subsection{Packet Memory Map}
\label{sec:mmap}
In addition to traditional sockets, the Linux kernel offers packet memory map
sockets. These specify a ring-type memory area into which the kernel can
directly write packets. After initialization the interface operates in
promiscuous mode, which means it will capture all packets arriving at the
interface. In practice this requires the user space program to poll for events
on the memory mapped socket, after which it processes the packets and marks
them as free to be reused for more packets arriving from the network. \ci{mmap}

Also, if the socket is created with AF\_PACKET and the PACKET\_FANOUT option,
the packets will be spread evenly over the threads which have registered to use
the interface \ci{fanout}.

\subsection{PF\_RING}
There is also a custom packet capture module and driver named PF\_RING,
described in \ci{garcia2013high}. Since the module had not made its way into
the mainline Linux kernel, there was concern that it could die out and take any
software built upon it with it. It also might not offer drivers for a specific
network card or might force the use of an older kernel. Still, it is not ruled
out, as a module for it can easily be added to \vbsdot

\subsection{Splicing}
\label{sec:splice}
Linux user space can make use of splicing, which is data transfer between
kernel memory pipes. A feature that caught my attention is that splice commands
can be given flags which enable them to move data between locations without
copying. This means that the packets could be spliced efficiently from their
kernel space socket buffer directly to disk. \ci{splice}

Splicing from a socket is currently only supported for TCP \ci{splice_to_disk}.
Also, for our purposes, splicing from the socket would require the write point
to keep up with the receiving process. This in turn means that \vbs would be
forced into a RAID solution with fast enough write speeds. Furthermore, in
redundant array of independent disks (RAID) systems it is more optimal to
perform large writes, as they can be spread over more disks due to the stripe
size and subsequently utilize more of the disks' bandwidth.

It should be noted though that memory mapped spaces can have a file descriptor
associated with them. This means that packets could be spliced directly to
memory. There is some evidence that direct splicing is superior to manual send
or write commands \ci{tierney2012efficient}.

Splicing UDP packets to the network is supported, but with it the packet size
cannot be properly determined.

\section{From main memory to non-volatile storage}
\label{sec:writetodrive}
While the data is in memory, it is organized in so-called memory pages that are
a power of two in size. Operating systems often have extensive caching systems
for reducing hard drive transactions. Since \vbs handles its own caching, such
caches will be avoided.
\subsection{Virtual file system}
\label{sec:vfs}
The Linux virtual file system (VFS) provides a disk cache named the page cache
for keeping regularly used pages in memory. These pages are only flushed on
request or when the system runs out of usable memory. In addition, all written
pages are copied into the page cache for efficient combination of writes and
sharing of pages between processes. In this application this does not serve
much purpose and also causes an extra memory copy. Since it is desired to
utilize the maximum disk write bandwidth from the start of a recording, keeping
pages in memory delays the data write and can cause a jerk when free memory
runs low enough for the writes to begin: the memory is suddenly full of pages
that need to be written to disk, and processes are blocked from getting their
share of memory until the pages are written.

The write to disk can be forced though, but this still might not release the
memory that was allocated for the cache and does not remove the extra memory
copy. \ci{bovet2008understanding}
\subsection{Direct I/O}
\label{ddstuff}
The GNU/Linux user space also provides an O\_DIRECT flag, which can be
specified when opening a file. This flag will skip all page caches and write
the data directly to disk. There is a requirement though: all writes must be
multiples of the block size, which is traditionally 512 bytes and 4096 bytes
on newer mediums. \ci{open}

Due to this requirement, many issues must be taken into consideration:
\begin{itemize}
  \item It might not be possible to write an integer number of packets, as the
    byte count might not be divisible by the block size.
  \item Dummy data might have to be written at the end of a file.
  \item When writing to the same file, only $n$ packets can be written at a
    time, where $n\times{\text{packet size}}$ is divisible by the block size.
    This means extra data not divisible by the block size cannot be written,
    unless partial packets are written.
  \item Packets lying sequentially in memory cannot be stripped of header
    data without an extra memory copy, since the packet size minus the bytes
    stripped will most likely not align.
\end{itemize}

A motivational graph for direct I/O is shown in figure \ref{fig:ddplot}. As the
data is written to a RAID, it benefits from using large writes that parallelize
better over the disk drives as each write is split into stripes. Without direct
I/O, the blocks are automatically grouped into larger writes as they are copied
into the VFS page cache. The downside of the copying is the CPU overhead. dd is
a widely used Unix command line tool for copying data. During the writes, the
dd process with direct I/O consumed only about 14\% of the cycles of a single
CPU core, whereas the non-direct runs consistently took 100\% of the cycles of
a core and so were probably bottlenecked by single-core performance.

\begin{wrapfigure}{r}{0.5\textwidth}
  \scalebox{\graphwidth}{\input{dd_plot.tex}}
  \caption{Write speed to 14 disk software raid 0 with 4096MB file from memory}
  \label{fig:ddplot}
\end{wrapfigure}

The first dd run, without reversed order, performed better with smaller writes,
but this was most likely because free memory still remained at the start of the
test run, so no cycles were spent on flushing memory. For confirmation, the
test was restarted in reverse order, so the very large block sizes were tested
first.

\section{VLBI backends}
\label{vlbibacks}
Since the project was aimed at recording astronomical data, some parts of the
recording process should be explained. After the signal is focused onto the
receiving unit, it travels through a shielded cable to a digital back end. The
digital back end performs base band conversion and sampling of the data. The
end result is an array of power values divided by band and time.
\subsection{DBBC and FILA10G}
The Digital Base Band Converter (DBBC), developed by Gino Tuccari et al., can
be connected with VSI cables to FILA10G boards, which convert the antenna data
into network packets. FILA10G uses stateless UDP traffic \ci{tuccari2010dbbc2}.
The conversion from a data stream to network packets is usually done by field
programmable gate arrays (FPGA) that fit multiple data samples into a packet
and send it to the network.
\section{Data formats}
\subsection{Mark5b}
The mark5b data format is specified in \ci{mark5bformat}. The relevant bits
are:
\begin{itemize}
  \item Frame number within second
  \item Julian day
  \item Second of day
  \item Fraction of second
\end{itemize}
When the receiving end gets a full second's worth of data, the number of frames
per second is known. After this, the frame number and second constitute an
index for the data, by which the amount of missing data and the location of
out-of-order packets can be determined. Each header combined with a payload
constitutes a 10016 byte frame.
\subsection{FILA10G}
\label{sub:fila10g}
For network transfer, the rather large 10016 byte mark5b frames are split into
two 5008 byte frames, and the FILA10G adds a 32-bit filler and a 32-bit counter
in front of each frame. This means the network packets are 5016 bytes.
During the spring of 2012, most correlator software could only process the
mark5b frames without the net frame. For the receiver at the correlator this
means it has to strip away the extra bytes from each packet.

The FILA10G had limited network functionality during the spring of 2012. The
author verified that it generates a 4Gb/s UDP data stream correctly, and its
documentation showed at least the ability to configure 8Gb/s UDP streams.

Some information on FILA10G can be gleaned from \ci{fila10g}, but most parts
were discovered through experimenting with the hardware itself.
\subsection{VLBI data interchange format}
VLBI data interchange format (VDIF) is a stream-based packetized data format,
which is meant to standardize VLBI data storage. The relevant bits are the
seconds from epoch and the data frame number within second, which can be used
to resequence the data stream. \ci{vdif}
\section{Related work}
VLBI data recording is not a new problem. Its history is mostly dominated by
the Mark series. There was also a new arrival in 2012 called Xcube.
\subsection{Mark series}
The Mark series is a set of VLBI data recording devices, which started with
modified tape drives, moved to hard drives, and is currently somewhat of a
standard in the VLBI community with the Mark5 disk recorders. The Mark5 series
has banks of disk drives, which are controlled by StreamStor disk interface
cards. The series from Mark5A to Mark5C supported input modules from VSI to
Ethernet. Though sold as separate hardware, the machines are mostly COTS
components.
\subsection{Mark6}
Mark6 is the newest in the VLBI recording series. The design has four bays for
hard drives, each dedicated to its own NIC, resulting in a total maximum
recording rate of 16Gb/s. All the hardware is COTS, but with a different
casing. The software is open source. \cite{mark62}

Browsing the source code \ci{vdas} in the autumn of 2011 showed that the
software uses the PF\_RING driver for packet capture. This could also be
deduced from the 4Gb/s limit on individual network cards, as the interrupt rate
spikes beyond that rate. The implementation uses only the PF\_RING aware
drivers, which might be to ensure that the network card's regular network
capabilities keep working.
\subsection{Xcube}
Xcube is a modular data recording and analysis platform, which is used in the
automotive industry but is also marketed to the VLBI community. The system is
quite similar to Mark6. \ci{xcube}

An additional feature is burst mode, which offers recording at a higher speed
than the disk system supports, for a limited amount of time. This is most
likely implemented simply with memory buffers that allow some flexibility in
recording. The receive medium can be chosen freely, so the design is most
likely modular. During a visit in the spring of 2012, a staff member informed
us that they were using the PF\_RING aware driver.
