\chapter{Discussion}
\label{chapter:discussion}
\section{Performance}
The desired initial performance was set to the recording of at least a 4Gb/s UDP stream
as described in \ref{sub:fila10g}, within the hardware limits introduced at the
start of Chapter~\ref{chapter:hardware}. For operational purposes, an added
requirement was the ability to send previous recordings while another recording was
active, which enables the recording of an active session and the correlation of a
previously recorded one at the same time. These goals were achieved fairly early on with the experiment
in \ref{sec:distri}, where a 2Gb/s stream was sent while simultaneously
receiving dummy data at between 5.6Gb/s and 7.5Gb/s, depending on the station.

After these experiments, the focus shifted to testing \vbs against the
hardware limits of its underlying system. The author's aim was to create a
software solution not bound to specific hardware, which would give the software a longer
lifetime. Using two 10GE interfaces simultaneously showed that transfers faster
than 10Gb/s are possible, but they appear to be capped by the PCI-E bus limits, as observed in
\ref{sendandreceive}. UDP streams faster than 10Gb/s might still result in
packet loss, but in the spring of 2012 these could not yet be properly tested:
40GE interfaces were still too rare to obtain.
Interface bonding could have been used, but the method was unfamiliar to the author.

An important factor in achieving good performance is the moment when a packet is
moved from the network into the random access memory (RAM) buffer. The receiving side of a UDP stream
gets a continuous flow of data with no flow control, but once a packet
is in the memory buffers introduced in \ref{sec:archi}, the processing can be
relaxed from real-time to best-effort. Releasing the write to persistent storage
from a tight real-time schedule better accommodates the physical characteristics
of spinning disks, where e.g. vibrations might cause a sudden drop in
write speeds.
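The decoupling described above follows the classic producer--consumer pattern. The following minimal sketch (in Python, purely for illustration; \vbs itself is not written this way, and the names \texttt{receiver} and \texttt{writer} are hypothetical) shows how a real-time receive side can hand packets to a best-effort write side through an in-memory queue, so that a momentary drop in disk speed only grows the buffer instead of dropping packets:

```python
import queue
import threading

def receiver(packets, buf):
    # Real-time side: push each incoming packet into the RAM buffer
    # immediately; nothing here may block on disk I/O.
    for pkt in packets:
        buf.put(pkt)
    buf.put(None)  # sentinel: the stream has ended

def writer(buf, storage):
    # Best-effort side: drain the buffer whenever the disk keeps up;
    # a slow spindle only grows the queue, it never drops packets.
    while True:
        pkt = buf.get()
        if pkt is None:
            break
        storage.append(pkt)

buf = queue.Queue()
storage = []
packets = [bytes([i]) * 8 for i in range(4)]
t = threading.Thread(target=writer, args=(buf, storage))
t.start()
receiver(packets, buf)
t.join()
```

Because the queue is FIFO, the packets reach persistent storage in arrival order even though the two sides run asynchronously.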
\section{UDP considerations}
\label{udpdisc}
Although all other tests went well, the ones with multiple NICs receiving a few
high speed streams showed packet loss. As soon as another NIC was added
to the tests, the packet capture started to suffer from packet
loss. This might be due to contention for resources: with two NICs, neither has
a monopoly on the structs and other resources of the kernel's packet receiving
path, and the overhead of serializing access to these resources might be the
cause of packet loss with high speed streams. Increasing the kernel socket
buffer size is one way to mitigate this, but perpetually increasing a buffer's size
cannot be counted as a final solution.
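As a concrete illustration of the buffer-size workaround, the sketch below (in Python for brevity; the helper name \texttt{set\_receive\_buffer} is hypothetical) requests a larger kernel receive buffer with the standard \texttt{SO\_RCVBUF} socket option. Note that on Linux the kernel may clamp the request to \texttt{net.core.rmem\_max}, so the granted size is read back rather than assumed:

```python
import socket

def set_receive_buffer(sock, rcvbuf_bytes):
    # Request a larger kernel receive buffer; the kernel may clamp the
    # request, so report back what was actually granted.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
granted = set_receive_buffer(s, 4 * 1024 * 1024)  # ask for 4MiB
s.close()
```

This buys headroom against short scheduling hiccups, but as noted above it does not address the underlying contention.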

During the spring of 2012, no actual backends existed that could hit this
limitation of VLBI-streamer. The issue might be solved simply by moving to a newer
kernel with different scheduling parameters. A newer hardware platform
would also at least alleviate the symptoms.
\section{TCP considerations}
\label{tcpdisc}
Although there was not enough time to test TCP multistreaming with real data
sets, it showed that the mean transfer rate can be increased by 2.2Gb/s on the
50ms, 7Gb/s line from Metsähovi to Italy. Further testing and development could be
useful for future work. Also, since \vbs is
growing into too large a piece of software, this multiplication of TCP streams
could be done outside of it. A simple program could be developed to either convert
a single TCP stream into multiple streams or vice versa. This software could
then run on both ends of a long fat pipe, converting in a nearly transparent
fashion.
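The core of such a converter is only a framing scheme: each chunk of the original stream is tagged with a sequence number, dealt round-robin over the parallel streams, and re-sorted on the far end. The sketch below (Python, illustrative only; in a real deployment each list would be a TCP connection, and the function names are hypothetical) shows that logic in isolation:

```python
def split_stream(data, nstreams, chunk=4):
    # Sending side: tag each chunk with a sequence number and deal
    # the chunks out round-robin over the parallel streams.
    streams = [[] for _ in range(nstreams)]
    for seq, off in enumerate(range(0, len(data), chunk)):
        streams[seq % nstreams].append((seq, data[off:off + chunk]))
    return streams

def merge_streams(streams):
    # Receiving side: gather the chunks from all streams and re-sort
    # by sequence number to restore the original byte order.
    chunks = sorted(c for s in streams for c in s)
    return b"".join(payload for _, payload in chunks)

data = b"VLBI data over a long fat pipe"
restored = merge_streams(split_stream(data, 4))
```

Since the conversion is symmetric, the same program could run at both ends of the link, one instance splitting and the other merging.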

Since the characteristics of the Metsähovi to INAF line suit a
typical intra-European VLBI Network (EVN) session, this could be a superior mode of transfer for
VLBI sessions, where a live recording could be available as a high
bandwidth TCP stream at the correlator within seconds. The buffering nature of
TCP would also automatically limit the transfer rate to that of the slowest
station, which would be the correlation speed anyway.

\section{Software development considerations}
The project was started with the assumption that memory copies should be avoided
at all costs. Though they should still be avoided, there are some cases
where the complexity of \vbs could have been reduced by moving its features into
smaller units that work as preliminary stages of the data handling.

An example of this is byte stripping, as discussed in \ref{datamani}. If the byte
stripping were done by a very small program that simply spliced data from a UDP
socket and forwarded it via a local domain socket to VLBI-streamer, the large
amount of byte stripping logic in \vbs could have been avoided. Since byte
stripping would have required extra memory copies anyway, a solution with
separate programs and memory copies in between would probably give even better
performance results than implementing the feature directly into \vbscomma
especially from the software development viewpoint.
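Such a relay would amount to very little code. The sketch below (Python for illustration; the header length and the function name \texttt{relay\_one} are assumptions, and a \texttt{socketpair} stands in for the real UDP and local domain sockets) shows the whole idea: read a datagram, drop its header, forward the payload:

```python
import socket

HEADER_LEN = 8  # assumed size of the per-packet header to strip

def relay_one(src, dst):
    # Read one datagram from the upstream socket, drop its header,
    # and forward the bare payload towards the recorder.
    pkt = src.recv(65536)
    dst.send(pkt[HEADER_LEN:])

# socketpairs stand in for the UDP socket (upstream) and the local
# domain socket the recorder would listen on (downstream).
upstream, udp_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)
recorder_side, local_side = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

upstream.send(b"\x00" * HEADER_LEN + b"payload")
relay_one(udp_side, local_side)
payload = recorder_side.recv(65536)

for s in (upstream, udp_side, recorder_side, local_side):
    s.close()
```

The recorder itself then never needs to know the headers existed, which is exactly the reduction in complexity argued for above.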

During \vbs development, a range of different utilities was developed, from
network testing to metadata inspection tools. These parts could be detached
from the original project into separate toolkits for the VLBI community.
