\chapter{Experiments}
In this chapter we look at how VLBI-streamer handles different network loads in
different network scenarios. The local experiments involve two machines
connected to each other either directly or through a switch. The machines used
in the local tests were Ara and Watt at Metsähovi Radio Observatory, with either
optical or CX4 10GE interconnects. These tests mostly simulate the recording of
data sets from a digital backend at an observatory.

Remote tests have Flexbuff-like machines at stations separated by relatively
large geographical distances. These tests simulate the sending of data sets to a
correlation facility after an experiment has been recorded. The simultaneous
recording of ongoing experiments is also tested.

The goal of these experiments is to test whether VLBI-streamer can send and record
the data rates available from the digital backends described in section \ref{vlbibacks}.
VLBI-streamer's performance in these tests shows whether its software architecture,
combined with Flexbuff's hardware housing, can operate as a radio astronomical
data buffer at the stations. As VLBI-streamer's main function is the local
receiving of data, the experiments focus mostly on local receive capabilities.

The hardware of the test machines is listed in Appendix \ref{chapter:first-appendix}.
All tests show time as wall-clock time and speed in Mb/s. The measurements were
started a few seconds before the start of the data transfer and continued long
enough to check that performance was stable and would not regress over time.
\label{chapter:experiments}
\section{Local receive with UDP}
In a local receive scenario a VLBI back end streams data as UDP packets from
a non-regular network device that is restricted to generating and sending the
packets. In these experiments \flex is used to record UDP packets in a local
network with a packet generator machine and a receiver machine. The attributes of
interest are the scalability with memory and disk drives and the maximum
throughput values.

\subsection{10GE local receive}
\label{10gelocaludp}
Four different scenarios were tested for local receive: 1, 4, 16 and 128
separate streams being received simultaneously. The results are shown in
figure \ref{fig:wattlocalstreams10ge}, with the total bandwidth in red and the
other lines as the individual streams. The data was sent at Metsähovi Radio
Observatory between Ara and Watt through a direct 10GE CX4 connection. The
program sending the data had a separate thread for each stream and sent data in a
busy loop. After the number of streams was increased from 4 to 16, the sending
program no longer distributed the packets evenly, but the total bandwidth
remained the same and no packet loss was registered at the receiving machine's
kernel level.

The receiving threads have a higher priority than normal user space processes.
This guarantees fast access to the kernel socket buffer and prevents it
from overflowing. When a large number of threads were run, as here with 128
threads, the system became slightly unresponsive but captured all packets
without error. Setting the higher priority can be disabled, so with a large
number of threads the priority could be left at normal to keep the system
operable. The higher priority is not needed with a large number of threads,
since the individual streams are relatively slow. Also, since each stream
continuously requires a memory buffer, the buffer size for 128 streams was
dropped to 64 MB. This increases the number of memory buffers, but it can also
cause performance issues, as each buffer has a thread running on it.

\begin{figure}
  \centering
  \subfloat[1 Stream]{ \scalebox{\graphwidth}{\input{watt_1_streams_final.tex}} }
  \subfloat[4 Streams]{ \scalebox{\graphwidth}{\input{watt_4_streams_final.tex}} } \\
  \subfloat[16 Streams]{ \scalebox{\graphwidth}{\input{watt_16_streams_final.tex}} }
  \subfloat[128 Streams]{ \scalebox{\graphwidth}{\input{watt_128_streams_final.tex}} } \\
  \caption{10GE local receive on Watt with dummy UDP streams generated from Ara. 64 buffers were used
  in all tests, except with 128 streams, where 256 smaller buffers were used.}
  \label{fig:wattlocalstreams10ge}
\end{figure}

\subsection{2x10GE local receive}
\label{2gelocaludp}
The 10GE test showed no particular bottlenecks, so further tests were done by
adding the Myricom fiber channel card and modifying the packet generator software
to fan out its connections over a comma-separated list of targets. Preliminary
tests showed that the increased priority in \vbs caused other services on the
system to stall. This might be because the system root file system is mounted
over NFS, which can starve when a large number of high priority threads run on
the system. Since the high priority threads are only needed for near line rate
single socket receives, the priority was dropped to normal.

The tests showed no packet loss in the receiving end's kernel. The results are
shown in figure \ref{fig:wattlocalstreams20ge}. With a lower number of threads
the sockets started to suffer packet loss. Most likely acquiring a fresh buffer
for writing took too long under the heavy load, and the kernel socket buffers
started to overflow, forcing the kernel to drop packets.

\begin{figure}
  \centering
  \subfloat[24 Streams 2 NIC]{ \scalebox{\graphwidth}{\input{watt_24_streams_2nic_final.tex}} } 
  \subfloat[256 Streams 2 NIC]{ \scalebox{\graphwidth}{\input{watt_256_streams_2nic_final.tex}} } 
  \caption{2x10GE local receive on Watt with dummy UDP streams generated from Ara. The 256-stream test used 512 buffers and the 24-stream test used the default 64 buffers.}
  \label{fig:wattlocalstreams20ge}
\end{figure}

\section{Simultaneous receive and send}
\label{sendandreceive}
An important aspect of buffering astronomical data is the simultaneous sending
of older or current recordings for correlation. This means \flex must be able to
send recordings it is currently receiving without exposing the receiving
processes to packet loss.
\begin{figure}
  \centering
  \subfloat[16 Streams 2 NIC send and receive]{ \scalebox{\graphwidth}{\input{watt_16_streams_rs2nic_final.tex}} } 
  \caption{2x10GE local receive and send in UDP. The red curve is the total send and receive
  bandwidth, the middle curve is the receiving process and the lowest curve is the sending
  process. A total of 16 receiving processes and 16 sending processes were run.}
  \label{fig:wattlocalstreams20gers}
\end{figure}
Figure \ref{fig:wattlocalstreams20gers} shows Watt receiving 16 streams and then
starting a send of another 16 streams through the two NICs. The receive rate
drops from an average of 1136 Mb/s per thread to 891 Mb/s. Since no packet loss
was registered on the kernel side, the receive rate drop must occur on the
pathways between the machines. Nevertheless, the total bandwidth in and out
averages 24 Gb/s, which can already start to bottleneck on the PCI express
side, since the NICs are connected with PCI-E 2.0 at only 8x and have shown
capping to below their specified speeds in earlier tests.
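A back-of-envelope check supports this explanation (nominal figures, not measurements). A PCI-E 2.0 lane signals at 5\,GT/s with 8b/10b encoding, so an 8x link carries at most
\[
8 \times 5\,\mathrm{GT/s} \times \frac{8}{10} = 32\,\mathrm{Gb/s}
\]
per direction before transaction-layer packet and flow-control overhead, which in practice removes a further 10--20\,\%. A sustained 24\,Gb/s of combined traffic is therefore already of the same order as what a single such link can move.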
\section{TCP performance}
On the TCP side the loss of packets is no longer a concern. This allows for
metrics on maximum system performance with dummy transfers. These metrics serve
as a kind of hardware limit for our system, beyond which performance improvements
from software cannot be expected. The test scripts and programs for invoking these
transfers are included in the \vbs repository. The total network transfer rate
was logged with a bandwidth monitor named bwm\_ng \ci{bwm}.

Before the TCP tests began, the connection between Italy and Metsähovi was
verified to work at about 7.4 Gb/s in UDP without packet loss. This also serves
as a reference value: a probable upper limit for TCP transfers.

\subsection{TCP reference values}
It should be noted that the bandwidth monitor does not take into account the
overhead of TCP transfers and only measures the number of bytes transferred
between the nodes. This overhead cannot easily be factored out of the data,
since variable size packets cause a variable size overhead. One could
simply measure the total amount of payload sent divided by the time spent on the
transfer. This way each test stream would try to send its full data payload and
then stop. Because TCP transfers are very opportunistic about the bandwidth they
use, the streams will exit at different times, as some are faster and others
slower. This causes a test of $N$ streams to have variable results: at the start
there are truly $N$ streams, but towards the end $N-1$, $N-2$ and so on,
depending on how uneven the transfers are.

A small test program was developed to send the same amount of data evenly over
all of the streams. It was quickly noted, though, that this approach does not
utilize the whole bandwidth efficiently, as faster streams have to stall while
slower ones catch up. This did not become a problem until two 10GE NICs were
used and performance started to lag behind.

Finally, a small program named groupsend was modified to work in a threaded mode
and renamed groupsend\_threaded. The program has a main loop which establishes
the TCP or UDP connections, starts the sending threads and monitors the threads
for the amount of payload they have sent.

Since large buffers with TCP tend to distort the values of sent data, as the
initially empty buffers are filled rapidly, a similar program working in
reverse, receiving data in a threaded mode, was also developed by the author and
named grouprecv. A thread is spawned for each stream, which empties the buffer
in a busy loop. All test programs are included in the \vbs sources.

\subsection{Local network TCP tests}
\begin{figure}
  \centering
  \subfloat[2x10GE NIC dummy receive in TCP\label{fig:tcpwatt:a}]{ \scalebox{\graphwidth}{\input{refvalues_tcp_2nic.tex}} } 
  \subfloat[2x10GE NIC dummy sending and receiving with TCP\label{fig:tcpwatt:b}]{ \scalebox{\graphwidth}{\input{refvalues_tcp_2nic_rs.tex}} } \\
  \subfloat[2 Streams 2x10GE send and receive in TCP with \vbs\label{fig:tcpwatt:c}]{ \scalebox{\graphwidth}{\input{tcp_dualreceive_andsend.tex}} } 
  \subfloat[8 Streams 2x10GE send and receive in TCP with \vbs\label{fig:tcpwatt:d}]{ \scalebox{\graphwidth}{\input{watt_8_nc_tcp_rands.tex}} } 
  \caption{2x10GE Local network receive and send.}
  \label{fig:tcpwatt}
\end{figure}

In figure \ref{fig:tcpwatt} the graphs \ref{fig:tcpwatt:a} and \ref{fig:tcpwatt:b}
show the system performance with a variable number of TCP streams. This works as
a good estimate of a ceiling value against which \vbs is tested.
Figure \ref{fig:tcpwatt:a} shows the receive speed of the data payload on two
10GE links in the local network. Figure \ref{fig:tcpwatt:b} shows how the system
performs when simultaneously sending and receiving with TCP. This is a relevant
test, as the receive process clearly caps at a network limit, whereas the
combined input and output test caps at a combination of system restrictions,
most likely the PCI-E lanes.

Figures \ref{fig:tcpwatt:c} and \ref{fig:tcpwatt:d} show \vbs working very
close to, and even exceeding, these reference limits. This means the reference
tester programs were not fully stressing the system and could use some more
development.

\subsection{Long range TCP tests}
\label{lrtcp}
\begin{figure}
  \centering
  \subfloat[Variable TCP-streams]{ \scalebox{1.1}{\input{groupsend_plot_inaf.tex}} } 
  \caption{Long range TCP transfer rates between Metsähovi and INAF over a path with a 50 ms delay. The mean value of the transfers rises as the number of streams increases.}
  \label{fig:tcpdummies}
\end{figure}

As speculated in section \ref{tcpmultistream}, dividing the transfer into TCP
substreams could give beneficial results on long fat pipes. Figure
\ref{fig:tcpdummies} shows how this technique raised the mean transfer
rates on a particular pipe. A single transfer stream here hits its congestion
limit every 20 seconds, shown by the sharp drops in transfer speed. As explained
in section \ref{ref:tcp}, this is the visible effect of a long ramp-up time, which
drops the mean transfer speed considerably. The ramp-up time increases as the
probing of the TCP path is affected by the delay. This is in contrast to the
local receives, where the ramp-up is not visible in the graphs, as the TCP stream
gets feedback from the TCP path almost instantaneously. The full path between
the stations was measured to exhibit packet loss with UDP transfers above a
rate of 7 Gb/s. This gave a good reference value against which to compare the
TCP results.
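The effect of the 50\,ms delay can be quantified with the bandwidth--delay product (illustrative arithmetic based on the figures above). Filling the measured 7\,Gb/s path requires
\[
7\,\mathrm{Gb/s} \times 50\,\mathrm{ms} = 350\,\mathrm{Mb} \approx 44\,\mathrm{MB}
\]
of data in flight. With $N$ parallel streams each congestion window only has to grow to roughly $1/N$ of this, and a single loss event resets only one stream's window, which is consistent with the rising mean transfer rate in figure \ref{fig:tcpdummies}.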

\section{Distributed performance}
\label{sec:distri}
\flex is meant to operate at individual stations, which are separated by long
geographical distances. \flex was tested running at four stations in addition to
the central correlator. Each station was set to send a pre-recorded data set to
the central correlator for correlation and to record a stressing stream at the
same time. This emulates the default behaviour of receiving an astronomical
session locally from the FiLA10G while sending an already recorded set onward
for correlation. The test setup is illustrated in figure \ref{fig:disttestsetup}.

As explained in \ref{sub:fila10g}, the receiving machine at the Joint Institute
for VLBI in Europe (JIVE) has to strip 8 bytes from each header. This adds some
processing overhead, as each packet must be written separately with the writev
backend and DIRECT\_IO cannot be used.
\begin{figure}
  \begin{center}
    \begin{adjustwidth}{-3cm}{-1in}% adjust the L and R margins by 1 inch
      \scalebox{0.8}{\input{testsetup.tex}}
    \end{adjustwidth}
  \end{center}
  \caption{Distributed test setup}
  \label{fig:disttestsetup}
\end{figure}
The different speeds of the stressers are due to the station machines having
different receive capabilities.
\begin{figure}
  \centering
  \subfloat[Jodrell Bank]{ \scalebox{\graphwidth}{\input{jb_final.tex}} }
  \subfloat[Medicina]{ \scalebox{\graphwidth}{\input{inaf_final.tex}} } \\
  \subfloat[Metsähovi]{ \scalebox{\graphwidth}{\input{dwatt_final.tex}} }
  \subfloat[Onsala]{ \scalebox{\graphwidth}{\input{onsala_final.tex}} } \\
  \subfloat[JIVE]{ \scalebox{\graphwidth}{\input{jive_final.tex}} }
  \caption{Distributed test}
  \label{fig:disttest}
\end{figure}
The Onsala graph in figure \ref{fig:disttest} shows a very unstable receive rate:
instead of a stable packet recording, the graph shows heavy undulation between
4.8 Gb/s and 6.8 Gb/s. This was due to packet loss, which was registered at the
kernel level. The machine was probably lacking some optimization steps, but
there was insufficient time to optimize it, and it would have been quite risky
to try to tune a machine 600 km away just hours before the experiment. Watt was
unstable due to prolonged FUSE development on it, which had left a lot of
defunct processes. The stalling behaviour could have been fixed with a reboot,
but went unnoticed during the experiments.

Jodrell Bank was limited to a 1890 Mb/s payload speed on their so-called JBOD link. This set a common upload limit, as the correlation was limited by the lowest upload speed. While the data was still being received at JIVE, the correlation of the pre-recorded data set was tested successfully by the engineers at JIVE. As the FUSE system was already in development, it was also tested and shown to work. The FUSE system enabled the use of single large files as the correlation data, instead of the default memory-buffer-sized separate files. There were some lockups in FUSE, though, which are likely due to a deadlock situation.

Some of the correlation load was also distributed to the receiving machine at JIVE, which showed no degradation in performance. Most likely the higher priority in \vbs protected its receive process.
