% LaTeX Template f�r Seminar "Wissenschaftliches Arbeiten" 182.697
% Uses IEEEtran style, adapted from bare_conf.tex
%
% v0.1  U. Schmid  26.9.2014    Initial version

% Ein paar n�tzliche Makros

\newcommand{\zitat}[1]{\lqq \emph{#1}\rqq}
\newcommand{\lqq}{\lq\lq}
\newcommand{\rqq}{\rq\rq}

%% bare_conf.tex
%% V1.3
%% 2007/01/11
%% by Michael Shell
%% See:
%% http://www.michaelshell.org/
%% for current contact information.
%%
%% This is a skeleton file demonstrating the use of IEEEtran.cls
%% (requires IEEEtran.cls version 1.7 or later) with an IEEE conference paper.
%%
%% Support sites:
%% http://www.michaelshell.org/tex/ieeetran/
%% http://www.ctan.org/tex-archive/macros/latex/contrib/IEEEtran/
%% and
%% http://www.ieee.org/

\documentclass[conference,A4]{IEEEtran}

% *** GRAPHICS RELATED PACKAGES ***
%
\ifCLASSINFOpdf
  \usepackage[pdftex]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../pdf/}{../jpeg/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.pdf,.jpeg,.png}
\else
  % or other class option (dvipsone, dvipdf, if not using dvips). graphicx
  % will default to the driver specified in the system graphics.cfg if no
  % driver is specified.
  % \usepackage[dvips]{graphicx}
  % declare the path(s) where your graphic files are
  % \graphicspath{{../eps/}}
  % and their extensions so you won't have to specify these with
  % every instance of \includegraphics
  % \DeclareGraphicsExtensions{.eps}
\fi

% *** PDF, URL AND HYPERLINK PACKAGES ***
%
\usepackage{url}
% url.sty was written by Donald Arseneau. It provides better support for
% handling and breaking URLs. url.sty is already installed on most LaTeX
% systems. The latest version can be obtained at:
% http://www.ctan.org/tex-archive/macros/latex/contrib/misc/
% Read the url.sty source comments for usage information. Basically,
% \url{my_url_here}.

% *** Do not adjust lengths that control margins, column widths, etc. ***
% *** Do not use packages that alter fonts (such as pslatex).         ***
% There should be no need to do such things with IEEEtran.cls V1.6 and later.
% (Unless specifically asked to do so by the journal or conference you plan
% to submit to, of course. )
\usepackage{newfloat}
\usepackage[dvipsnames]{xcolor}
\usepackage{hyperref}
\hypersetup{
    colorlinks=true,
    linkcolor=RedViolet,
    citecolor=RoyalPurple
}

\DeclareFloatingEnvironment[fileext=lod]{diagram}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

% unnumbered footnotes, taken from benevolent wikibooks authors
% https://en.wikibooks.org/wiki/LaTeX/Footnotes_and_Margin_Notes
\makeatletter
\def\blfootnote{\xdef\@thefnmark{}\@footnotetext}
\makeatother

\newcommand*{\fullref}[1]{\hyperref[{#1}]{\autoref*{#1} \nameref*{#1}}} % One single link

\begin{document}
%
% paper title
% can use linebreaks \\ within to get better formatting as desired
\title{Interconnect for commodity FPGA clusters: standardized or customized?*}

% author names and affiliations
% use a multiple column layout for up to three different
% affiliations
\author{\IEEEauthorblockN{name lastname}
\IEEEauthorblockA{matriculation number: matriculation number\\
study code: Computer Engineering\\
university\\
Email: \textcolor{NavyBlue}{email}}}

\maketitle

\begin{abstract}

The question of interconnect in an FPGA cluster is one designers of such a system are invariably confronted with.
Although there are soft cores for standard protocols such as Ethernet, RapidIO, Infiniband and Interlaken, each with a right to exist particularly for FPGA-to-other-systems, they are arguably inefficient and unnecessary for FPGA-to-FPGA interconnect.
We compare standard and custom protocols by examining how well they satisfy a well-defined set of requirements.
Using our custom communication protocol BlueLink, which was designed to use commodity FPGAs for the BlueHive FPGA cluster, we demonstrate that a few customizable interconnect components permit low-area, high-performance, reliable communication tuned to an application, making a case for custom communicate just like FPGAs are used for custom compute.
These properties are desirable, especially with increasing numbers of serial links.

\end{abstract}

\blfootnote{*This article is a rewriting/extension of A. Theodore Markettos, P. J. Fox, S. W. Moore, A. W. Moore, "Interconnect for comodity FPGA clusters: standardized or customized?", \textit{2014 24th International Conference on Field Programmable Logic and Applications (FPL)}, 2014, pp. 1-8 \cite{theo14}. It has been written in the course of the seminar "Scientific Writing" (193.052) at TU Vienna, which shall help undergraduate students to get first experience in the art of scientific writing.}

\section{Introduction}
\label{sec:introduction}

With the rise of field programmable gate arrays (FPGAs), accelerators making use of them have become of interest.
There are many use cases where an application can benefit from FPGA systems, but implementing them comes with its own set of challenges and design questions.
As FPGAs are bounded in their potential by the resources they offer or can offer, it is, depending on the specific application, of interest to consider FPGA clusters.

This bound is given by specific constraints, such as packaging limits for example, which limits how many I/O pins and thus dual in-line memory modules (DIMMs) can be attached.
Thus we want to tackle the problem of building multi-FPGA systems.
The first question that confronts us is the choice of FPGA.
Indeed, there is a large market with many FPGAs available, but the features, support and pricing are overall chaotic.
This question of FPGA choice, as well as a rudimentary market analysis will be given in \fullref{sec:market}.

The next topic that concerns us following the choice of FPGA is application partitioning.
Due to the diversity of applications and their often unique constraints, we will make an attempt at a classification in \fullref{sec:partitioning}.

Afterwards we consider in greater detail the specifics of communication in multi-FPGA systems and compare pre-existing interconnect protocols to then make a case for \textit{customized} interconnect as opposed to \textit{standardized} interconnect.
This will be done in \fullref{sec:requirements}, \fullref{sec:comparison} as well as \fullref{sec:svc}.

Finally, we will have a look at our custom FPGA interconnect toolkit that was designed for a specific application in \fullref{sec:stack}, consider its abstractions in \fullref{sec:network} and examine a real world example in \fullref{sec:application}.

Related work is examined in \fullref{sec:related} and concluding remarks are given in \fullref{sec:conclusion}.

\section{Market and Cost Considerations}
\label{sec:market}

In multi-FPGA systems, there are not only technical specifications to watch out for, but also cost considerations that have to be made.
Figure \ref{fig:market} shows trends in FPGA device pricing that is useful particularly when cost is a factor.
As can be seen, some premium devices are not necessarily equipped with more resources, so it makes economical sense to think about using commodity devices that are commonly sold to engineers for prototyping and are increasingly used for research and development.

\begin{figure*}[ht]
    \begin{center}
        \includegraphics[width=2\columnwidth]{res/bluelink-market.png}
        \caption{FPGA pricing trends where devices can be roughly clustered into two categories: smaller budget devices with lower cost per logic element, as well as premium devices that are not necessarily bigger, but cost considerably more. Cost considerations might make it important to choose budget devices wherever possible, or at least smaller premium devices. Data sourced from DigiKey \cite{digikey-ic}. Also plotted is a median price for a given model and size (coalescing packaging/speed grade options into a single point).}
        \label{fig:market}
    \end{center}
\end{figure*}

Something that should also be considered: there are more indirect sources of cost, such as development time and engineering effort.
As an example, the DINI group quote 'below 0.1c per [ASIC] gate' for a '130 million ASIC gate' system, where 20 Stratix IV 530 devices are utilized.
This results in a price per board of around US\$130,000.
\cite{dini-big-fpga}

Our choice of FPGA also has an influence on developer time and support, with lower end devices generally being more accessible and having lower cost tooling or IP cores.
The cost resulting from non-recurrent expenditures (such as design and tool costs) amortizes across all board units that are shipped, bringing the cost further down.
Importantly, a low cost device can be swapped out for a relatively small price.
The same does not apply for more exotic designs that prove irregular.
In the example of Mencer et al. \cite{menc09}, where 64 Spartan 3 FPGAs have been used on a large 8-layer PCB (320x480 mm), replacing a faulty device is difficult.
They employed fault tolerance instead, which by design reduced the available resources, something we want to avoid at this stage.
This leads us to the conclusion that it makes economical sense to build a cluster out of commodity FPGAs and will in turn become our focus.

\section{Problem Partitioning}
\label{sec:partitioning}

There can be quite a difference between applications, which of course has an impact on the requirements of the interconnect between FPGAs.
We might want to consider different applications and attempt a classification to better judge this impact.
Perhaps the most important factor is how much inter-device communication takes place.
After all, communication overhead puts a strain on the interconnect infrastructure.
If a specific problem requires no inter-device communication, we might refer to the problem as \textit{loosely coupled}.
Similarly, when inter-device communication cannot be avoided, our problem is \textit{tightly coupled}.

The class of loosely coupled problems can be tackled by existing and wide-spread accelerator frameworks.
MapReduce is an example that is available as a computing platform for general purpose devices.
There is also the class of OpenCL-based accelerators.
This simplicity in problem partitioning for loosely coupled problems allows for simpler clusters to be built.

Additionally, there can be latency requirements (we will consider an example in a bit).
These requirements can for instance arise when multiple devices have to operate synchronously and in lock-step.
This latency requirement can make problem partitioning difficult, much more so in the case of a tightly coupled problem, where all of a sudden the application requires our interconnect network to function in a timely fashion.

\subsection{Neural computing}

A fitting application for a multi-FPGA system would be neural computing.
When modelling the human brain, we are dealing with a model that comprises $10^{11}$ neurons with $10^{14}$ synaptic connections.
If our goal is let such a model operate in real-time, many messages have to be sent.
Considering that each neuron fires at about 10 Hz, there are approximately $10^{15}$ synaptic messages to be sent every second.
Neuron updates can be represented by a simple differential equation, but there is still a staggering multiplicity of computations to be performed.

Incidentally, it is this need for timely delivery of many small, low-latency messages that make multi-FPGA systems a good target for this problem.
CPUs struggle with the computational power while GPUs lack the communication facilities.

To make this example more concrete, we consider our previous work for the BlueHive system \cite{moor12}, where the Izhikevich neuron model \cite{izhi03} is examined.
With 48 bits per synaptic message and 128K neurons per FPGA, a total 1.28M 48-bit synaptic messages are generated for every FPGA every millisecond.
Every message has a real-time deadline of arriving before the next millisecond.
Of course, this worst-case can be averted by the fact that some neurons message other neurons on the same FPGA, although this effect is dependent on the concrete neural network being simulated.

\section{Interconnect Requirements}
\label{sec:requirements}

At this stage we want to consider interconnect requirements in general, to better understand differences between different interconnect protocols. Subsequently, we will identify specific properties that our interconnect needs to fulfill and summarize them.

\subsection{Connecting devices}

When trying to connect together hundreds of FPGAs, multiple boards will be necessary.
This raises the question of what types of connections are even feasible.
A possibility would be the use of GPIO pins.
Although utilizing them would be simple, there are limitations, for example driving them with low-voltage differential signalling (LVDS) caps us at about 1 GHz.
Other limiting factors particularly for parallel links would be: signal integrity (signals arrive incorrectly) and skew (signals don't arrive simultaneously).
With these constraining factors, it's difficult to build a multi-FPGA system of considerable size.

As for HSMC connectors, work by Kono et al. \cite{kono12} has demonstrated a data rate of 4 Gbps on Terasic DE4 boards, albeit using expensive proprietary ribbon cabling.
Because every board had only two ports, a ring topology was the only option.

There are also high-speed serial transceivers.
After all, we have seen a shift from parallel to serial interconnect (USB, SATA, etc.) for commodity I/O standards, which has enabled a market for cheap passive multi-gigabit serial cables.
A single FPGA can offer up to 96 transceivers with speeds ranging from 14 Gbps up to 56 Gbps.

As a result, it follows that a scalable cluster can result by opting for the following: commodity FPGA boards, serial interconnect with transceivers, low-cost commodity passive copper cabling (can optionally be optical cabling if necessary) as well as multi-hop routing (to enable topologies with lower connectivity).
Such a cluster has the advantage of being economical as well.

\subsection{Network properties}

We have considered the specific problem of simulating neural networks.
For problems like these, the task of building a multi-FPGA accelerator platform relaxes the message size requirements as payload sizes are small (48 to 256 bits), but demands low latency.
The network also needs to be reliable.
Because there are different forms of reliability, it should be clarified that in the example of neural computing, dropped messages invariably result in simulation inaccuracies.
Applications might also not support retransmission by themselves.
This demands a reliable network where messages are sure to arrive within the limits of the latency requirement.

\begin{table}
    \centering
    \begin{tabular}{ll}
        \hline
        \textit{small message sizes} & messages are constrained between 32 and 256 bits \\
        \textit{low latency} & latency is often more of a constraint than bandwidth \\
        \textit{reliable} & as faults are inevitable in a system with thousands of links \\
        & each running at gigabits per second, the system must be \\
        & reliable \\
        \textit{hardware-only} & all these requirements are to be implemented in hardware \\
        & and should be supported by the interconnect \\
        \textit{lightweight} & FPGA area remains a factor that cannot go unconsidered \\
        & and should thus be minimized \\
        & (particularly to enable small FPGAs) \\
        \textit{ubiquitous} & transceiver should be fully exhausted to maximize links and \\
        & link rates and thus gain bandwidth and reduce hops \\
        \textit{interoperable} & different FPGAs should be able to operate heterogeneously \\
        & in the same cluster \\
        \hline
        \vspace{1ex}
    \end{tabular}
    \caption{\label{tab:requirements}Required Properties of Interconnect.}
    \vspace{1ex}
\end{table}

The resulting requirement properties are summarized in Table \ref{tab:requirements}, where each of the properties is elaborated upon.

\section{Comparison of Standardized protocols}
\label{sec:comparison}

Next up we want to consider what interconnect protocols are already available.
Because re-use of IP cores can greatly reduce costs by driving down developer time, such a consideration is justified.
We list specifications of pre-existing and standardized interconnect protocols in Table \ref{tab:specs}.
These specifications consider in particular the available performance in the form a raw external link rate and constituent lane rate, while listing required resources.
These resources include specific device configurations, but also LUTs, registers and memory bits used.

Besides the ubiquitous Ethernet (which is often appropriate when dealing with loosely coupled problems, as Liang et al. \cite{chen13} show), this selection of standard cores includes the following additional protocols: Serial RapidIO, Infiniband, Interlaken, Fibre Channel, PCI Express, SerialLite as well as Aurora.
For completion, TCP/IP implemented in hardware is also included, even though it is often not used in such a context due to the expensiveness in terms of hardware resources.
In the work of Liang et al. \cite{chen13} but also Nejad et al. \cite{neja11}, TCP/IP was not considered at all.
Protocols without in-built support for reliability and packet retransmission were also included, as they could be used in conjunction with an additional reliability layer on top, even though in-built support would be preferable.

\begin{table*}[t]
\begin{minipage}{\textwidth}
    \centering
    \begin{tabular}{lcccrrr}
        \hline
        \hline
        \textbf{System} & \textbf{Raw external} & \textbf{Configuration} & \textbf{Constituent} & \textbf{LUTs} & \textbf{Registers} & \textbf{Memory bits} \\
        & \textbf{link rate} & & \textbf{lane rate} & & & \\
        \hline
        \hline
        \textbf{Systems with reliable transmission} \\
        \hline
        TCP/IP (in hardware) + Ethernet \cite{intilop-toe} & 10G & 1 lane & 10G & $<$ 30000 & Not quoted & Not quoted \\
        TCP datapath acceleration (Virtex 6) & 10G & excluding CPU/MAC/PHY &  & 6875 & 3889 & 221184 \\
        SerialLite II (Straix II 16 bit CRC) & 6G & 1 lane & 6G & 1448 & 1236 & 90624 \\
        & 24G & 4 lane & 6G & 2573 & 1659 & 176640 \\
        PCIe hard IP & 5G & 1x Gen 2 & 5G & 100 & 100 & 0 \\
        & 40G & 8x Gen 2 & 5G & 200 & 200 & 0 \\
        PCIe soft IP (Stratix IV) & 5G & 1x Gen 2 & 5G & 5500 & 4100 & 82944 \\
        & 20G & 4x Gen 2 & 5G & 7100 & 5100 & 239616 \\
        Serial RapidIO & 5G & 1x & 5G & 5700 & 7885 & 737280 \\
        & 20G & 4x & 5G & 7200 & 10728 & 901120 \\
        Fibre Channel (Stratix IV) \cite{morethanip} & 8G & 1x & 8G & 3300 & 3900 & 6144 \\
        Infiniband \cite{infiniband-cores} & 40G & LLC+TCA QDR 4x & 10G & 64105 & 63185 & 1584846 \\
        \hline
        \textbf{Systems that do not implement reliable transmission} \\
        \hline
        Infiniband \cite{infiniband-cores} & 40G & TCA QDR 4x & 10G & 36658 & 39912 & 1536807 \\
        SerialLite II (Stratix II) & 6G & 1x & 6G & 863 & 818 & 50688 \\
        SerialLite III \footnote{Figures not available for optional reliability extension} \footnote{\label{note-b}Provides insufficiently robust optional single bit error protection} & 120G & 12 lanes & 10.3125G & 5600 & 6200 & 983040 \\
        Aurora 8B/10B \cite{aurora} & 12G & 4 lanes & 3G & 3473 & 3319 & 75776 \\
        Aurora 64B/6B \cite{aurora} & 14G & 1 lane & 14G & 1600 & 1600 & 37920 \\
        & 56G & 4 lanes & 14G & 3500 & 3900 & 43172 \\
        1000Mb Ethernet MAC (external PHY) & 1G & 1 port RGMII & 125Mx4 DDR & 1184 & 1704 & 204976 \\
        1000base-X Ethernet MAC & 1G & 1 lane & 1.25G & 1805 & 2365 & 204976 \\
        10/100/1000Mb Ethernet MAC (ext. PHY) & 1G & 1 port RGMII & 125Mx4 DDR & 3155 & 3522 & 328064 \\
        & 1Gx12 & 12 port GMII & 125Mx8 SDR & 27360 & 29272 & 1479168 \\
        10Gb Ethernet MAC & 10G & 1 lane & 10.3125G & 2001 & 3077 & 0 \\
        40Gb Ethernet MAC & 40G & 4 lanes & 10.3125G & 13600 & 23500 & 184320 \\
        100Gb Ethernet MAC & 100G & 10 lanes & 10.3125G & 45100 & 87700 & 573440 \\
        Interlaken 100G \footnote{See footnote a} & 124G & 12 lanes & 10.3125G & 18900 & 36800 & 778240 \\
        Interlaken 50G \footnote{See footnote a} & 50G & 8 lanes & 6.25G & 12200 & 26300 & 942080 \\
        Interlaken 20G (Stratix IV) & 25G & 4 lanes & 6.25G & 12229 & 16774 & 479232 \\
        \hline
        \hline
        \vspace{1ex}
    \end{tabular}
    \caption{\label{tab:specs}Standard interconnect cores and their area, with data for Stratix V devices and for Altera cores from \cite{altera-interface-protocols} unless otherwise stated. Optional features have been excluding in favor of taking the minimal design.}
    \vspace{1ex}
\end{minipage}
\end{table*}

\subsection{Disqualifying traits for standardized protocols}

In order to make sure that our choice of protocol fulfills our interconnect requirement properties, a more fine-grained look is necessary.

\subsubsection{Ethernet}

Specifically for tightly coupled application, Ethernet is restrictive. Nejad et al. \cite{neja11} for example have used 37-bit payloads over Ethernet.
To make the best use of resources these payloads have to be aggregated in packets, increasing latency.
Ethernet also has no in-built reliability.
TCP/IP exists for this purpose, providing guarantees of packet delivery, but is too expensive and thus not ideal.
Like previously expressed, a reliability layer could be built on-top to provide reliability for Ethernet.

\subsubsection{PCI Express}

PCI Express (PCIe) can be used to connect FPGAs to a host PC.
By virtue of emulating traditional PCI over switched interconnect, its complexity is high.
That complexity is typically contained in PCIe hard cores, but PCIe hard cores support is not as extensive.
Typically FPGAs have only one PCIe hard core.

\subsubsection{Interlaken}

Perhaps more promising is Interlaken.
The protocol is designed to be scalable and relatively lightweight and is commonly used in high-end switches for backplane interconnect.
However, Altera's Stratix V core needs groups of eight/twelve bonded links to implement 50G/100G channels.
Because this constraint was incompatible with the physical topology of Stratix V boards available to us, our attempt at providing an alternative Interlaken layer was not fruitful.
There is another Interlaken core for Stratix IV devices that only needs groups of four or more channels, but it's incompatible with other devices and has no reliability support.

\subsubsection{Altera SerialLite}

SerialLite is a core provided by Altera that is reasonably lightweight.
Particularly SerialLite II would fit well, as it offers packet retransmission for small packets.
It suffers from neglect however, area numbers for example are only available for Stratix II.
Although it can be used on more modern FPGAs, the maximum link rate is constrained to 6 Gbps.
Because this core is proprietary, testing it live on an FPGA was not possible due to licensing restrictions.

SerialLite III modernizes SerialLite and can offer speeds of 10 Gbps and above.
The protocol was changed to only support single bit forward error correction.
The reliability requirement is not fulfilled, as in the case of a cluster with possibly thousands of links, cabling faults causing more substantial errors are likely.
SerialLite III should thus be considered as a protocol without reliable transmission.

\subsubsection{Aurora}

Aurora is similar to SerialLite, but is provided by Xilinx.
There is no reliability built in, which limited the usable link rate in the examples of Bunker et al. \cite{bunk13} (FPGA clusters) and Kouadri-Mostefaoui et al. \cite{koua08} (SoC prototyping).
Like with other protocols, reliability would have to be built on top.

\section{Standardized vs Customized}
\label{sec:svc}

\subsection{Custom Communication}

Having looked extensively at standardized protocols, we can consider if it would be advantageous to go for custom communication.
Just like \textit{custom compute} is employed by FPGA designers, an approach typically more effective than using standard CPU soft-cores on FPGAs, \textit{custom communication} can be employed.
With custom communication we are either forsaking standard interconnect protocols or extending them with additionally needed layers.
Running a non-standard communication protocol comes with its own set of challenges and can be costly in terms of developer time, but just like in the example of custom compute, such an approach can extract additional performance.

As we have already seen in \fullref{sec:comparison}, many practical difficulties arise when using standard IP cores in an FPGA cluster.
These difficulties (many of which are configuration related) are described and summarized in more detail in Table \ref{tab:difficulties}.

\begin{table}
    \centering
    \begin{tabular}{ll}
        \hline
        \textit{configuration constraints} & available parameters (for example link rate or number \\
        & of bonded lanes) might not be appropriate \\
        \textit{fitting requirements} & standards might demand unfulfillable requirements \\
        & (clock frequencies, PLLs or clock routing) \\
        \textit{bonded links} & commodity board and serial cabling might not provide \\
        & a suitable configuration (insufficient lanes, unsuitable \\
        & placement or skew over different cables) and can add \\
        & additional latency by reducing the idmensions of the \\
        & cluster (compared with single links) \\
        \textit{manufacturer specificity} & protocols like SerialLite and Aurora are exclusive to \\
        & specific manufacturers and reimplementation would \\
        & involve an additional core vendor or a custom \\
        & implementation \\
        \textit{FPGA support} & cores might not support some FPGAs, might be \\
        & withdrawn in new tools or neglected, requiring \\
        & extensive reworking or disallowing the use of \\
        & newer FPGAs \\
        \textit{licensing} & licensing IP cores can prove expensive \\
        \hline
        \vspace{1ex}
    \end{tabular}
    \caption{\label{tab:difficulties}Summary of difficulties and disqualifying traits of standardized protocols.}
    \vspace{1ex}
\end{table}

\subsection{Overhead reduction with custom communication}

By using a customized interconnect protocol, overhead can be reduced with a solution that fulfills all required properties.
For the purpose of this work, as well as the previously built BlueHive system \cite{moor12}, the BlueLink protocol was designed.
The design of the BlueLink protocol will be discussed shortly, but first we want to underline our point by comparing BlueLink to Ethernet (on a Stratix V FPGA).
Table \ref{tab:comparison} shows usage of LUTs, registers as well as memory bits.
Even though BlueLink uses less of all resources, the minimal memory footprint is notable in particular.

Looking at logic and registers, 10G BlueLink only uses 65\% compared to 10G Ethernet.
Even 40G BlueLink (using bonded lanes) compares well, using about the same area as a single 10G Ethernet MAC.
The memory footprint is only 15\% compared to 10G Ethernet.

Looking back to Table \ref{tab:specs}, BlueLink is more efficient than all standard cores that provide reliability as well as the majority of those that do not.

\begin{table}
    \centering
    \begin{tabular}{lrrr}
        \hline
        \textbf{System} & \textbf{LUTs} & \textbf{Registers} & \textbf{Memory bits} \\
        \hline
        10G BlueLink reliability layer & 1663 & 1277 & 2090 \\
        10G BlueLink link layer & 179 & 413 & 960 \\
        10G BlueLink PHY & 167 & 248 & 0 \\
        \hline
        \textbf{10G BlueLink total area} & 2009 & 1938 & 3050 \\
        \hline
        40G BlueLink reliability layer & 1965 & 1355 & 2090 \\
        40G BlueLink link layer & 1127 & 1970 & 2736 \\
        40G BlueLink PHY & 289 & 585 & 0 \\
        \hline
        \textbf{40G BlueLink total area} & 3381 & 3910 & 4826 \\
        \hline
        10G Ethernet MAC & 2986 & 3817 & 20972 \\
        10G Ethernet PHY & 100 & 94 & 0 \\
        \hline
        \textbf{10G Ethernet total area} & 3086 & 3911 & 20972 \\
        \hline
        \vspace{1ex}
    \end{tabular}
    \caption{\label{tab:comparison}Area comparison between BlueLink and Ethernet on a Stratix V FPGA, with BlueLink comparing favorably, in particular with respect to memory.}
    \vspace{1ex}
\end{table}

We included additional benchmarks (against Ethernet) for throughput, latency as well as area against bandwidth.
Throughput is measured by considering how much overhead a packet has for payloads of a specific size (in bits).
The benchmark can be seen in Figure \ref{fig:throughput}.
BlueLink focuses on small packet sizes and compares favorably for packets up to 256 bits.
The graphic also shows the overhead that is added when using IP and/or TCP over Ethernet.

\begin{figure}[ht]
    \begin{center}
        \includegraphics[width=\columnwidth]{res/bluelink-throughput.png}
        \caption{Overhead of BlueLink and Ethernet for different packet sizes (mainly small packets). Up to 256 bits, BlueLink makes considerably better use of bandwidth.}
        \label{fig:throughput}
    \end{center}
\end{figure}

For latency, two cases are considered.
In the first, the input queue is empty, while in the second the link constantly receives input as fast as it can transmit.
The tests were performed on short physical links with low error rates.
In the fully-loaded case, BlueLink has less latency still, even though it features an additional reliablity layer adding extra latency.
For light loads, flits can be accepted in a single cycle, making BlueLink's latency much lower than Ethernet's.
An important remark is that the lightly-loaded case can be made much more likely by using more transceivers.

\begin{figure}[ht]
    \begin{center}
        \includegraphics[width=.65\columnwidth]{res/bluelink-latency.png}
        \caption{Latency of BlueLink and Ethernet at 10 Gbps. BlueLink fares well for light loads, which can be upheld if many transceiver links can be used.}
        \label{fig:latency}
    \end{center}
\end{figure}

Lastly we want to consider bandwidth and area.
Area against performance is a tradeoff that occurs very commonly in FPGA designs, particularly in modern FPGAs with many transceivers.
We consider the Stratix V GX A7 FPGA, the lowest cost Stratix V that Terasic sells on evaluation boards, in a scenario in which we want to use all transceivers.
This particular FPGA offers 14.1 Gbps for each of the 48 transceivers.
Figure \ref{fig:bandwidth} shows how much area is needed for different standards against the raw bandwidth the standards can afford.
10 Gbps per lane is a limit because of commodity cabling, even though BlueLink and SerialLite III have a higher theoretical bandwidth.
BlueLink compares well as it is a lightweight protocol.

\begin{figure}[ht]
    \begin{center}
        \includegraphics[width=\columnwidth]{res/bluelink-bandwidth.png}
        \caption{Stratix V GX A7 logic utiliziation when using all transceivers with each of the protocols. Protocols are divided by reliability support, with black protocols implementing reliability, while orange protocols do not. Area data was taken from \ref{tab:specs} and \ref{tab:comparison}. Multi-lane systems have the benefit of using less area by sharing it between multiple transceivers, but require additional fitting and impose additional routing constraints on the design.}
        \label{fig:bandwidth}
    \end{center}
\end{figure}

\section{BlueLink architecture}
\label{sec:stack}

In order to have a solution that addresses all the requirements outlined in \fullref{sec:requirements}, we created BlueLink as an interconnect toolkit.
The architecture of BlueLink consists of five major layers which can be examined in Figure \ref{fig:architecture}.
It was implemented using SystemVerilog.

\begin{figure}[ht]
    \begin{center}
        \includegraphics[width=.65\columnwidth]{res/bluelink-architecture.png}
        \caption{Architecture of BlueLink interconnect, highlighting the use of transceivers and what comprises a block.}
        \label{fig:architecture}
    \end{center}
\end{figure}

To describe the architecture in a little more detail, we will subsequently describe each of the specific layers.

\subsubsection{Serial Transceiver layer}

This layer consists of a core that is already provided by FPGA manufacturers.
In order to function, two assumptions need to hold: the core implements an 8b10b coding and is configured to send and receive 32-bit words with a 4-bit k symbol indicator.

No other assumptions are made.
64b66b would also work as an alternative for 8b10b coding.
BlueLink was tested on Altera Stratix IV and Stratix V transceivers, but use of transceivers by other manufacturers should be straightforward.
We have tested SATA, PCIe, SMA, SFP+ copper and SFP+ optical cabling, but any physical medium should suffice.

Reliable data transmissions are comprised of packets that are expanded with additional information, in order to be able to provide said reliability.
The basic unit of such a transmission is a \textit{flit} with a 64-bit payload and 12-bit addressing field.
The reliability layer adds a 32-bit CRC, a sequence number and an acknowledgement field, bringing the size to 120 bits.
On the physical layer, another header with 8 bits is added for a total of 128 bits per flit.
This can and often will be conveniently split by transceivers into 4 $\times$ 32-bit words.

\subsubsection{Physical layer}

The function of this layer is to translate between a FIFO-like stream of words (by the Link layer) and a continuous stream of words for the serial transceiver.
Does alignment and symbol insertion/removal as needed.

\subsubsection{Link layer}

Translation between flits (128-bit) and words (32-bit) is the work of this layer, but also clock crossing. The main FPGA clock domain is translated to the transmit and receive clock domains of each transceiver.

\subsubsection{Reliability layer}

This layer builds reliability on top of previous primitives with ordering and back-pressure.

Reliability is achieved by using a CRC and sequence number in each flit.
Because of the high probability of false-negatives in a large cluster with billions of flits per second, the CRC consists of 32 bits.
This layer is responsible for adding these reliability parameters during transmission but will also check the CRC and sequence number when receiving.
Should this check fail, either because of the CRC or the sequence number, no acknowledgement is sent.
For pending acknowledgements with no input flits, a flit with no payload is sent.

Transmitted flits that have not yet been acknowledged are stored in a replay buffer, making this particular approach window-based.
Because a failing reliability check sends no acknowledgement, only after a timeout will the head of the replay buffer be sent again (continuously until it is acknowledged).
4-bit sequence and acknowledgement numbers lead to the replay buffer only needing to hold 8 $\times$ 64-bit flits to store a whole retransmission window.
This property is a major contributor to the low area requirements of BlueLink, compared to other protocols with longer flits/packets and larger windows.

Backpressure is realized using a flag that indicates whether additional flits can be accepted.
No further flits will be transmitted, making the block's input FIFO become full.

\subsubsection{Routing and switching layer}

Hop-by-hop routing with the ability to direct packets to a given FPGA is implemented by this layer.

\subsubsection{Application layer}

This higher level layer implements useful primitives for applications.

\section{Network semantics}
\label{sec:network}

Different cluster architectures have topological differences or requirements that define their network semantics.
In order to be as generally usable as possible, a framework might implement multiple interfaces with different network semantics in the form of virtual channels.
It was shown in the work of Narayanan et al. \cite{prit20} that multiple virtual channels can be designed that utilize an underlying packet router logic.
Multiple abstractions exist in BlueLink to enable different communication paradigms, designs and more easily support problem partitions.
To list two examples, a FIFO abstraction allows for a hardware dataflow architecture to be split across FPGA boundaries, while remote direct-memory-access (DMA) makes it possible to view partitions as nodes in a cluster-wide shared memory architecture.

\subsubsection{Bluespec FIFO}

In Bluespec SystemVerilog, FIFO abstractions are often used instead of Verilog wires (Bluespec SystemVerilog is a dataflow hardware description language).
Hardware modules can thus be easily decoupled with minimal logic overhead.
In BlueLink there is a Bluespec FIFO type available that can join two modules on different FPGAs.
The overhead is comprised of only 10-20 extra cycles of latency (compared with an on-chip FIFO).

\subsubsection{Packet routing}

Packet based network semantics are offered by BlueLink from software on custom processors.
Flit send and receive buffers are exposed to the hardware.
Using traditional polling or interrupt mechanisms, an application can be informed that a packet was delivered.

\subsubsection{Blocking reads and writes}

To reduce latency compared to polling or interrupts, blocking the application until it can successfully perform the operation makes sense when reading or writing the flit buffer.
The result is lower overhead, but it introduces the risk of deadlocks.
A special feature is using part of the address of a write to denote a target FPGA, allowing for a flit to be sent in a single clock cycle.

Successful blocking has been demonstrated in a simple example using a NIOS-II CPU executing code from DRAM located on another board.
While the link cable is unplugged, the CPU remains paused.

\subsubsection{Remote direct-memory-access (DMA)}

Remote direct-memory-access (DMA) is a useful abstraction that allows reading or writing to a specific region of memory (or memory-mapped peripheral) on a remote FPGA.
The interconnect handles translating this request to as many packets as are necessary to perform the operation and will transparently return the result as if it were a local operation.
Blocks can be transferred using burst reads and writes.
In this case, bursts are translated into a sequence of operations that is appropriate, since the application cannot reasonably be aware of the remote FPGA's memory map (such as word sizes for example).
Byte enables are used as needed when a request does not align with word boundaries.

\subsubsection{Software pipes}

Linux pipe semantics are a widespread way to handle dataflow between applications.
Using an abstraction layer, these semantics are emulated.
This has the benefit that an application can be tested on a PC using Linux pipes between processes, but later run on the cluster unchanged.

\section{Application of BlueLink}
\label{sec:application}

We already mentioned the BlueHive neural computation engine \cite{moor12}.
BlueLink was a key enabler for this project, specifically implemented on the DE4 Stratix IV 230 GX FPGA board from Terasic.
The choice stems from the fact that it maximizes the number of DDR2 memory channels.
It is also in line with our choice of utilizing commodity devices, as this device is placed in the middle of the Stratix IV range.

The board was connected using a PCB specifically designed and open-sourced \cite{pci-to-sata-breakout-board} to break out transceivers using PCI Express connectors into 6 Gbps SATA links.
Combining all this, it was possible to build a \textit{pluggable topology} using low-cost SATA cables.
Even the FPGA boards' own SATA sockets were used.

A BlueHive box consists of 16 DE4 boards and was designed with the intention of further scaling by connecting boxes using eSATA cables.
Our current focus is building enclosures for 150 FPGAs.

Finally, a portable version was built by designing a PCB that joins three FPGA boards using their PCIe 8 $\times$ connectors.
Even Stratix V FPGAs with 40 Gbps BlueLink bidirectional channels (using groups of 4 $\times$ 10 Gbps lanes) can be connected.
SFP+ cables can additionally be used to connect boards.

\begin{figure*}[ht]
    \begin{center}
        \includegraphics[width=2\columnwidth]{res/bluelink-bluehive-summary.png}
        \caption{Summary image containing the BlueHive box, a PCIe to SATA breakout board and PCB for BlueLink over PCIe (in this order).}
        \label{fig:bluehive-summary}
    \end{center}
\end{figure*}

All of these components (the box, breakout board and PCB for BlueLink over PCIe, in this order) can be viewed in Figure \ref{fig:bluehive-summary}.

The FPGAs are also used to host two custom soft vector processors driving a DDR2-800 memory channel for neural state update computations and the generation of synaptic messages.
BlueLink is used to route these messages between processors.
The system has demonstrated success in simulating two million neurons in near real-time.
As for scaling, the limit is primarily bounded by compute, indicating that bandwidth and latency have ceased becoming bottlenecks and shows that the application scales well.

\section{Related Work}
\label{sec:related}

We have already spent a great deal talking about BlueHive \cite{moor12}, which is closely related to this work.
Other related work includes other cluster designs dealing with many of the same problems, such as Bunker et al. \cite{bunk13}, who have built latency-optimized networks for FPGA clusters. Kono et al. \cite{kono12} have analyzed tightly coupled FPGA-clusters, specifically for use in lattice Boltzmann computations. A match engine for content-based image retrieval was built by Liang et al. \cite{chen13}. Mencer et al. \cite{menc09} built the cube: a 512-FPGA cluster. A more specialized platform for NoC (network-on-chip) emulation/debugging was built by Kouadri-Mostefaoui et al. \cite{koua08}. Nejad et al. \cite{neja11} consider how to partition a NoC based system into smaller sub-systems (each with their own NoC) on FPGA boards and bridging schemes at different levels of the NoC protocol stack.

\section{Conclusion}
\label{sec:conclusion}

Many lessons can be taken away from this challenge of interfacing multiple FPGAs in a system that aims to accelerate a specific problem.
We have looked at various considerations, from technical to economical factors, as well as difficulties that arise in such an undertaking.
Various properties of FPGAs that are conducive towards building a better performing interconnect stack like high-speed transceivers were also looked at.

Before we have introduced BlueLink, we extensively compared pre-existing standards and IP cores, which certainly have their benefits and appear seductively attractive at first.
However, subtle difficulties in using them as well as unfulfillable requirements or unoptimality has made us determined to build a better interconnect protocol for the specific set of requirements.
Ethernet for example is often a natural choice for networking, but imposes significant overhead and latency penalties for small messages in our FPGA application, besides taking up considerable area and lacking reliable transmission.

Besides examining protocols, a case was made for cheaper commodity FPGAs, hinting at the possibility of building a scalable but economical cluster.
Specific problems with standard cores apply particularly for cheaper commodity FPGAs, because they might not properly support a protocol, for example due to their configuration.

We have also taken an extensive look at reliability, how it influences performance and how different protocols compare in this regard across the board.
Either standardized protocols don't support reliability at all, take up too much area of the FPGA, have bandwidth limitations or are in some other way restricted.
Some of these issues are only resolved by building a reliability layer on top, effectively introducing custom communication.
Our approach shows why it is sometimes important not to simply reach for standard IP cores.

\bibliographystyle{IEEEtran}
% argument is your BibTeX string definitions and bibliography database(s)
\bibliography{bibfile}

% that's all folks
\end{document}
