\chapter{Introduction}
\label{chapter:intro}
\section{Data recording in radio astronomy}
Continued advances in digital signal processing enable radio astronomy instruments
to produce measurements at ever higher resolution. Increasing the sample rate
allows a larger bandwidth to be received, but inevitably also increases the rate
at which data is generated. This poses challenges in handling both the bandwidth
and the volume of the data. Data sources generate continuous multi-gigabit
streams, such as the ones described in Section~\ref{sub:fila10g}. Any recording
system receiving these streams must continuously sustain the sent data rate or
suffer packet loss, which results in data loss and possibly a botched
observation.
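The effect of an unsustained receive rate can be made concrete with a small sketch: a receiver that reads UDP datagrams and counts gaps in a per-packet sequence number. The port number, packet size, and header layout (a 64-bit big-endian sequence number at the start of each packet) are illustrative assumptions here, not the actual VLBI frame format.

```python
import socket
import struct

def count_lost(seqs):
    """Count missing sequence numbers in an increasing stream."""
    lost, prev = 0, None
    for seq in seqs:
        if prev is not None and seq > prev + 1:
            lost += seq - prev - 1  # a gap means dropped packets
        prev = seq
    return lost

def receive_and_count(port, npackets, pkt_size=8256):
    """Receive UDP packets and tally drops.  The port, packet size
    and header layout are illustrative assumptions only."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    seqs = []
    for _ in range(npackets):
        data, _addr = sock.recvfrom(pkt_size)
        seqs.append(struct.unpack_from(">Q", data, 0)[0])
    sock.close()
    return count_lost(seqs)
```

Since UDP gives no retransmission, any gap detected this way is unrecoverable: the recorder must keep up in real time or the data is simply gone.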

Multiple stations can observe the same target at the same time. When their data
are combined, or correlated, the result emulates a single dish with a diameter
equal to the distance between the stations. The participating antennas, with
diameters of up to a hundred meters, can be individual dishes separated by whole
continents or a collection of similar dishes within visual range of each other.
The former scenario is the more relevant one for this thesis, as it adds the
requirement of distributing the high data rate data sets between geographically
separated stations.
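The resolving power gained by correlation follows the usual diffraction
relation $\theta \approx \lambda / B$, where $\lambda$ is the observing
wavelength and $B$ the baseline length. As a rough illustration (the values
below are assumed examples, not taken from a specific experiment), a
continental baseline of $8\,000$~km observed at a $1.3$~cm wavelength gives
\[
\theta \approx \frac{\lambda}{B}
       \approx \frac{0.013\,\mathrm{m}}{8 \times 10^{6}\,\mathrm{m}}
       \approx 1.6 \times 10^{-9}\,\mathrm{rad}
       \approx 0.3\,\mathrm{mas},
\]
far beyond the resolution achievable with any single dish.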

Recording of astronomical data started with tape drives and hardware
correlators, but has since moved to spinning hard disks combined with software
solutions, and will most likely someday move to solid-state drives and fully
software correlators. Distribution of the data has likewise moved from shipping
physical disks to streaming over network connections. The physical connections
between the devices have also changed from bulky custom cables to standard
network links, from which the VLBI community benefits as the networking
industry develops rapidly.

If there are consistent requirements in the research sector, they are
affordability and compatibility. Anything saved in handling the data can be
invested in better instrumentation for generating the data itself. Furthermore,
building a solution around a single data type would restrict the solution's
lifespan to that particular type of data.

\section{A commercial off-the-shelf solution}
\label{introcots}
The requirements explained above form the basis of this thesis. The work itself
is divided into a hardware and a software solution. The former is a collection
of hardware components recommended for building an efficient data recording
system, which we will call \flex. The latter is the software running on the
Flexbuff, named \vbsdot
\vbs is freely available open source software developed by the author and
available at \url{https://code.google.com/p/vlbi-streamer/}.

As the IT industry drives the development of commercial off-the-shelf (COTS)
hardware, it stands to reason not to develop custom hardware for individual
components. A template for a recording element was sketched and called
\flex\ci{wp81}. Individual components will surely be replaced with each
successive generation, but the overall component types will probably remain
similar: there will be slow non-volatile memory, faster volatile memory and
some form of interconnect between machines.

Assuming the evolution of the underlying hardware drives our instruments
forward, software developed for a custom hardware platform would not remain
competitive in the long run: it would require redevelopment of the software for
every hardware generation. The software \vbs was developed to abstract away the
hardware it is running on, yet still work efficiently with that hardware and
exploit advanced features when available. As the development is done on a Linux
system, there is much to be gained from advances in kernel development. This
also suggests that development should be restricted to user space without
custom kernel patches, which would again require redevelopment and maintenance,
or the software would be bound to a moribund kernel.
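As an example of the kind of tuning that remains possible from user space
without touching the kernel, a receive socket's kernel buffer can be enlarged
with a plain \texttt{setsockopt} call. The sketch below is illustrative: the
64~MiB request is an assumed value, and Linux silently caps the grant at the
administrator-set \texttt{net.core.rmem\_max} limit.

```python
import socket

def make_receive_socket(port=0, rcvbuf=64 * 1024 * 1024):
    """Create a UDP socket with an enlarged kernel receive buffer.
    The 64 MiB request is an illustrative value; the kernel caps it
    at net.core.rmem_max and (on Linux) doubles the granted size to
    account for bookkeeping overhead."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf)
    sock.bind(("", port))  # port 0 picks an ephemeral port
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    return sock, granted
```

Checking the granted size after the call matters: if the kernel capped the
request, the application knows to expect drops at high rates rather than
discovering them in the recorded data.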

The challenge of large distances between observing stations is met by the
global increase in network connectivity. To make use of this connectivity, \vbs
aims to use common network protocols efficiently, while also allowing new
solutions to be incorporated into its architecture.

The use of \flex and \vbs is not restricted to radio astronomy. There are
startups selling data acquisition solutions with customers in, for example, the
automotive industry \ci{xcubesite}. Their main focus is capturing sensor data
at high rates. Within interferometry, the target of interest can also be, for
example, sea level variations \cite{sealevels}.

The hardware side of \flex was researched by Esa Turtiainen, Jouko Ritakari,
Ari Mujunen and Minttu Uunila\ci{wp81}. \vbs was designed and developed on top
of this research by the author, and shaped by almost daily dialogue with the
authors of the original Flexbuff design.

\section{Structure of this Thesis}
\label{section:structure}
Chapter \ref{chapter:hardware} goes through the individual hardware components
\flex uses and the considerations in using them. Chapter \ref{chapter:software}
covers the operating system software considerations and motivations. After
these background chapters, Chapter \ref{chapter:vlbistreamer} describes the
developed software and the important design considerations found during
development. Chapter \ref{chapter:experiments} presents experimental results
showing \vbs operating with network loads of up to 20~Gb/s on local receive, as
well as distributed performance between several stations. Finally, Chapters
\ref{chapter:discussion} and \ref{chapter:conclusions} examine the results and
future uses.


