\chapter{Hardware}
\label{chapter:hardware}

This chapter describes the hardware components relevant to high speed networked
data recording. What exactly is relevant for this particular project was studied
earlier by Esa Turtiainen et al.\cite{wp81}. During the spring of 2012 the components
of \flex were roughly:
\begin{itemize}
\setlength{\itemsep}{0pt}
\item 10 Gigabit Ethernet (10GE) network card.
\item Large pool of hard drives (24+).
\item Enough serial advanced technology attachment (SATA) controllers to control the hard drives.
\item 12 or more gigabytes (GB) of modern high speed memory.
\item Modern multi-core central processing unit (CPU).
\item Advanced Technology eXtended (ATX) motherboard and a rack-mounted chassis to house it all.
\end{itemize}
It should be noted that all of these components will probably be replaced by more
advanced ones shortly after, likely offering performance values an order of magnitude
larger. The term \flex was coined to describe this type of hardware arrangement.
The following sections cover a few issues that were relevant during the spring of
2012 and which influenced VLBI-streamer's design.

The coming topics cover different techniques along the data's path from the
network to persistent storage. These can be segmented by looking first at the network
card receiving the data packets, then at the bus connecting the network card to memory,
then at how the data is moved from memory to the buffer of a persistent storage device,
and finally at the techniques for writing to persistent storage.

\section{Network interface cards}
Flexbuffers are mostly equipped with 10GE network cards for data transfer
and 1GE links for management. The various 10GE network cards support a plethora
of features. As will be explained in \ref{vlbibacks}, the first part of
the transfer consists of connectionless UDP packets, so there will be less emphasis
on TCP-specific techniques.

\subsection{Interrupt mitigation}
The default behaviour for packet reception in an operating system (OS) is to invoke
an interrupt routine that handles the packet's transfer from the network card to
kernel memory space. Since interrupt routines are high priority and can cause trouble
if they run for too long, they usually simply set flags for the default OS threads
to handle the raw data transfers.
With 10GE, the rate of interrupts can rise very high
as the number of packets per second increases.
With, for example, 2048-byte packets and taking $1\,\mathrm{Gb} = 2^{30}\,\mathrm{b}$:
$\frac{10\,\mathrm{Gb/s}}{2048 \cdot 8\,\mathrm{b}} = 655360\,\mathrm{p/s}$,
so there would be 655360 interrupts per second. Since network card
interrupts are usually bound to a single core of a multicore processor, the core
would most likely be congested by context switches, which could result in packet
loss and poor efficiency.
\cite{interruptmigi}
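The packet-rate arithmetic above can be sketched as a short calculation; the 2048-byte packet size and the $2^{30}$-bit gigabit convention follow the example in the text:

```python
# Interrupt rate if every received packet raises one interrupt.
# Uses the same convention as the text: 1 Gb = 2**30 bits.
link_rate_bits = 10 * 2**30      # 10 Gb/s line rate
packet_bits = 2048 * 8           # 2048-byte packets

packets_per_second = link_rate_bits // packet_bits
print(packets_per_second)        # 655360 interrupts per second
```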

The problem is addressed with interrupt mitigation, which sets a time window
during which only one interrupt is allowed. The network card buffers the
packets until the OS handles the whole group in the next interrupt. This
reduces the number of context switches, and testing showed that with this feature
interrupt handling consumed only about 10\% of the cycles of a single core.
The time limit should not be too long though, as it increases latency, and
with a high enough bit rate the network card's buffers can overflow. \cite{interruptmigi}
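As a rough sketch of this overflow bound, the maximum coalescing interval is the buffer size divided by the line rate; the 512\,KiB buffer size below is a hypothetical example value, not a measured one:

```python
# Upper bound on the interrupt coalescing interval: the NIC's buffer
# must not fill up between two consecutive interrupts.
buffer_bits = 512 * 1024 * 8     # hypothetical 512 KiB NIC buffer, in bits
link_rate = 10 * 2**30           # 10 Gb/s, same convention as above

max_interval_s = buffer_bits / link_rate
print(f"{max_interval_s * 1e6:.1f} us")  # about 390.6 us
```

Longer coalescing windows than this would drop packets at full line rate, which is why the interval cannot be raised arbitrarily.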

On the kernel side, interrupt mitigation is handled by the new
application programming interface (NAPI), which does not require any action from
the user-space developer. \cite{napi}

\subsection{Infiniband}
Infiniband is an interconnect used mostly for high performance
computing (HPC) and data centers. Infiniband potentially could provide fast
interconnects abstracted away from common network I/O concepts and troubles. The
largest benefit would come from remote memory accesses described below.

In VLBI the distances between stations are large, which means relying on existing
network connections, Bandwidth on Demand (BoD) links and the public Internet,
where InfiniBand connections are not available. It should be noted, though, that
VLBI-streamer's modular architecture allows the development of an InfiniBand
module. \cite{infini}

\subsection{Remote Direct Memory Access}
Remote Direct Memory Access (RDMA) enables transfers between the processes and
memory of interconnected nodes without the continuous involvement of either side's
CPU. It is a kind of bypass that avoids extra copying through kernel space.
InfiniBand sports RDMA, but some normal NICs have similar capabilities through
a protocol called iWARP, which shows promise in bandwidth utilization. \cite{iwarp}

The RDMA draft is quite new, though, and exposing a memory segment to the public
Internet raises security considerations. \cite{rdma_over} Again it is not
ruled out, as a module for it can be developed for \vbsdot


\subsection{NIC tuning}
Network card performance improvements can be gained by adjusting several parameters:
\begin{itemize}
  \item Increasing the kernel receive buffer size.
  \item Enabling interrupt mitigation and tuning interrupt intervals.
  \item Adjusting the backlog length for TCP connections.
  \item Experimenting with and researching a suitable congestion control algorithm for TCP, depending on the connection.
For UDP connections, most of the tests had the kernel receive buffer set to
16\,MB.
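Requesting a larger kernel receive buffer for a UDP socket could look roughly like the following sketch; the 16\,MB value matches the test setup above, and on Linux the kernel clamps the granted size to the \texttt{net.core.rmem\_max} limit:

```python
import socket

RECV_BUF = 16 * 1024 * 1024  # 16 MB, as in the UDP tests above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the kernel for a larger receive buffer; the kernel may clamp
# the request to the net.core.rmem_max sysctl limit.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RECV_BUF)

# getsockopt reports the size actually granted (on Linux the kernel
# doubles the requested value to account for bookkeeping overhead).
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
sock.close()
```

If the granted size falls short of the request, raising \texttt{net.core.rmem\_max} with \texttt{sysctl} is the usual remedy.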
\section{VLBI standard interface}
\label{ref:vsi}
The VLBI standard interface (VSI), described in \cite{vsispec}, is a custom
parallel interconnect for VLBI data. In addition to data pins it sports clock
signals that enable connected devices to share a pulse per second (PPS) for
synchronization. \flex comes into play after the data has already been
structured into packets, but a module for a peripheral component interconnect
(PCI) lane connected VSI card is a possibility.
\section{SATA controllers}
\subsection{Advanced Host Controller Interface}
Advanced Host Controller Interface (AHCI) is a data movement engine that
abstracts SATA 2 control away from the host machine and implements a
standard interface. I/O requests are scheduled by signaling the appropriate
AHCI ports. Completed requests are signaled with aggregated (mitigated)
interrupts. \cite{ncq}
\subsection{Native Command Queuing}
Native Command Queuing (NCQ) is a feature implemented in SATA 2 that runs in the
disk firmware rearranging access requests to optimize throughput and reduce head
seeks. \cite{ncq}

For \vbs, NCQ means that small requests are aggregated into larger ones when they
target a sequential file strip. Most modern hard drives implement NCQ
through AHCI, and using it does not require any extra tuning.
\section{Disk Drives}
VLBI data recording has evolved from tuned tape drives and video cassette recorders
to hard drives. The emergence of solid state drives might eventually replace
spinning disk drives, but currently the capacity and price of spinning disk
drives make the older technology more practical. Spinning disk drives can also be used
as efficient recording media if their characteristics are taken into account.

Everything described in this section can be found more thoroughly written in \cite{disk_drives}.
\subsection{Spinning disk drives}
Traditional hard drives have spinning platters on which data is preserved
magnetically. Most of the performance cost occurs when seeking data at
another location on the platter. The cost of physically moving the read
head is very large compared with CPU cycles. The head must also settle on
the correct location on the track.

In data access patterns this means that random accesses are very costly. For
data recording, this encourages using sequential writes and reads to obtain the
largest bandwidth. When reading or writing sequential data, the data rate
depends on whether the data is on the outer or inner tracks.
The difference is due to inner tracks requiring more frequent seeks, since they
contain less data per track.
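A minimal sketch of the sequential, large-block write pattern this favours; the block size, block count, and temporary file are illustrative choices, not values from the recorder itself:

```python
import os
import tempfile

BLOCK = 4 * 1024 * 1024   # illustrative 4 MiB write block
NBLOCKS = 8               # illustrative number of blocks
payload = b"\x00" * BLOCK

# Write the blocks back to back, so that on a spinning disk the head
# keeps moving forward along a track instead of seeking around.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for _ in range(NBLOCKS):
        f.write(payload)

size = os.path.getsize(path)
print(size)               # 33554432 bytes written sequentially
os.remove(path)
```

Large, contiguous writes like these let the drive stream data at the track's media rate, whereas scattering the same bytes across the platter would pay a head seek per write.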

There is a possibility that all disks of a system would start converging on
the inner tracks at the same time, since the disks are all of the same size and
model with uniform performance characteristics. This would cause the total system
throughput to drop dramatically. If this is seen as a real threat
scenario, the suggested fix is to repartition the disks with a non-uniform block
division and to mount them randomly as write points. This way different tracks
fill up first and the performance stays randomly uniform during operation.

Modern disk drives also have large caches to mitigate random data access costs.
This means the data might not yet be written to the platters, but is sitting
in the cache. For our purposes these caches have no function, since they are
minuscule compared to the data volume recorded in regular VLBI sessions.

Most spinning disk drives also have an internal request queue. Since the
mechanical delay of seeking is quite large, all incoming data requests are
placed in a queue, which the disk drive can arbitrarily rearrange to minimize
mechanical delay. As will be explained in \ref{sec:vfs}, the OS also performs
this kind of rearranging and combining.
\subsection{Solid state drives}
Solid state drives (SSD) are NAND-flash based storage units. In addition to
higher read and write speeds, a big advantage of SSDs is their ability to perform
random data accesses almost as fast as sequential ones, due to the absence of
moving parts. During the spring of 2012 the capacity and price per gigabyte of
SSDs were still far behind traditional hard drives, but SSD technology might
someday replace them.
