\documentclass[11pt]{article}
\usepackage{graphicx}
\usepackage[margin=1in]{geometry}
\usepackage{url}
\usepackage[title,titletoc]{appendix}
\usepackage[table]{xcolor}
\usepackage[tikz]{bclogo}
\usepackage[round]{natbib}
%\usepackage{cleveref}

\usepackage{amsmath}
%\usepackage{times}
%\usepackage{wrapfig}
%\usepackage{float}
%\usepackage[colorlinks,plainpages=false]{hyperref}
%\usepackage[nonumberlist]{glossaries}
\usepackage[font=small,labelfont=bf]{caption}
%\usepackage{tikz}
%\usepackage{gantt}
%\usepackage[none]{hyphenat}
%\makeglossaries
%\loadglsentries{glossary}
\renewcommand{\appendixname}{Appendix}
\title{PDC HPC Summer Course DN2258/FDD3258 2012:\\\emph{Report}}
\author{Student: \\ \emph{Ekaterina Brocke,}\\\emph{KTH, Royal Institute of Technology, Stockholm, Sweden and }\\ \emph{National Centre for Biological Sciences, Bangalore, India}\\\\
Supervisor: \\ \emph{Mikael Djurfeldt}\\\emph{ PDC, KTH, Sweden}}
\date{\today}

\begin{document}
\maketitle
\pagebreak
\section{Introduction}
Modeling and simulation in neuroscience is nowadays one of the
applications of High Performance Computing (HPC) systems. Efficient
use of HPC resources can confer a considerable advantage in
computational cost and enable large-scale simulations of neuronal systems.
% promote simulations within biologically
% realistic time scales.

In this project I work with the MUlti SImulation Coordinator
(MUSIC) library~\citep{_mdj_2010}. In particular, I examine the
communication algorithms implemented in this library, analyze their
performance and, where possible, improve their scalability.
 
The MUSIC library allows large-scale neuronal simulators (such as NEST,
NEURON and MOOSE) to communicate during runtime. MUSIC provides an
Application Programming Interface (API) that supports data
exchange among parallel applications in a cluster environment. The
library implements two inter-processor spike communication
algorithms. Both algorithms use the standard Message Passing
Interface (MPI): one relies on point-to-point and the other on
collective communication operations.

The point-to-point algorithm involves communication between two specific
processors in a communication group of applications. It uses
blocking MPI\_Send\footnote{This routine
  does not return until the message data and envelope have been safely
  stored away so that the sender is free to access and overwrite the
  send buffer. The message might be copied directly into the matching
  receive buffer, or it might be copied into a temporary system
  buffer~\citep{_mpi2_}.}/MPI\_Recv calls in the following
communication schema (MUSIC release, 2009). Suppose a parallel application
\textit{A} communicates with a parallel application \textit{B}. Each
source processor ${P_{A_i}}$ calls the blocking MPI\_Send once for
each target processor ${P_{B_j}}$. Each target processor ${P_{B_j}}$
loops over the source processors ${P_{A}}$, invoking the blocking
MPI\_Recv for each. The sequence of operations performed by the
point-to-point algorithm during each initiated communication is shown
in Figure~\ref{fig:p2p_2009}.

At the application level of communication, efficient scheduling
(Connector:MUSIC) is achieved with the Complete Pairwise Exchange
(CPEX) algorithm~\citep{_mdj_2005,_tam_2000} (referred to as the Shift
Exchange Algorithm in~\cite{_tam_2000}). The algorithm's objective is
efficient utilization of network resources: it avoids network
stalling during the complete exchange, in particular the
link/node contention that can appear when message transmissions are
poorly scheduled, and the Head-Of-Line blocking (HOL
blocking) phenomenon\footnote{Head-Of-Line blocking is a performance-limiting
  phenomenon that occurs when a line of packets is held up by the
  first packet. It can have severe performance-degrading
  effects in input-buffered systems.}. The algorithm performs the
complete exchange (all-to-all communication) in ${O(N)}$ phases and
guarantees freedom from deadlock.
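The scheduling idea behind the shift exchange can be sketched with a small Python model (an illustrative sketch only; MUSIC itself is implemented in C++, and the actual CPEX scheduling is more elaborate): in phase ${k}$ every rank ${r}$ sends to ${(r + k) \bmod N}$ and receives from ${(r - k) \bmod N}$, so the destinations within a phase form a permutation and the complete exchange finishes after ${N - 1}$ phases.

```python
def shift_exchange_schedule(n):
    """Phase-by-phase (send_to, recv_from) partners of every rank in a
    shift-exchange complete exchange over n processes.

    In phase k each rank r sends to (r + k) % n and receives from
    (r - k) % n, so no two ranks contend for the same target within a
    phase and the complete exchange finishes after n - 1 phases.
    """
    return [[((r + k) % n, (r - k) % n) for r in range(n)]
            for k in range(1, n)]
```

Within every phase the destinations are a permutation of the ranks, and over all phases each rank sends to every other rank exactly once.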

\begin{figure*}[h!]
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering
      \includegraphics[width=\textwidth]{figures/p2p_2009.pdf}
      \caption{Point-to-point communication algorithm implemented in
        MUSIC (2009). The sequence diagram illustrates the operations
        performed by the point-to-point algorithm during each
        initiated communication. Each processor in a communication
        domain has input connectors, output connectors or both,
        depending on the specified connections between the
        applications. Here processor \#3 communicates with
        processor \#5. In each initiated communication, sorted
        buffered events are sent in a loop to each target processor,
        among which is processor \#5, using the blocking MPI\_Send
        call. Processor \#5 waits in a loop for the events to arrive
        from each of the source processors, among which is
        processor \#3. After each data package arrives, events are
        processed by calling an application handler function. When the
        events from all source processors have been handled, the
        simulation continues.}
      \label{fig:p2p_2009}
    \end{minipage}
  }
\end{figure*}
  
The all-to-all communication requirement places a heavy load on the
network. Therefore MUSIC was extended with a new communication algorithm
using the collective MPI\_Allgather call~\citep{brocke}. The allgather method
shows good load balance among the processors and can be used as a
baseline for comparison with other communication
methods~\citep{hines_2011}. The sequence of operations performed
during each initiated communication is depicted in
Figure~\ref{fig:allgather}.

\begin{figure*}[h!]
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering
      \includegraphics[width=0.45\textwidth]{figures/allgather.pdf}
      \caption{Collective communication algorithm implemented in
        MUSIC. The sequence diagram illustrates the operations
        performed by the collective algorithm during each initiated
        communication. Here processor \#2 participates in a
        collective communication. In each initiated communication the
        algorithm performs two steps: the exchange of the data sizes
        followed by the exchange of the data itself. The collective
        communication occurs ``in place'', with the output buffer
        being identical to the input buffer (by providing the
        MPI\_IN\_PLACE argument). The spikes are thus stored directly
        at the location where they are subsequently processed, which
        reduces unnecessary memory motion by both the MPI
        implementation and the user. Finally, when the
        MPI\_Allgatherv call has finished, the source objects are
        located and the appropriate handler function is
        called. Then the simulation continues.}
      \label{fig:allgather}
    \end{minipage}
  }
\end{figure*}
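The two-step exchange can be illustrated with a small Python model (an illustrative sketch only, not MUSIC's C++ implementation): the first step distributes the per-rank payload sizes, from which each rank can derive the receive counts and displacements that MPI\_Allgatherv needs; the second step gathers the variable-sized payloads themselves.

```python
def allgather_two_step(per_rank_events):
    """Model the two-step collective exchange: step 1 distributes the
    per-rank payload sizes (the MPI_Allgather of sizes); step 2
    concatenates the variable-sized payloads at rank-ordered offsets
    (the MPI_Allgatherv of the data), which every rank then holds."""
    sizes = [len(events) for events in per_rank_events]           # step 1
    displs = [sum(sizes[:r]) for r in range(len(sizes))]          # receive offsets
    gathered = [e for events in per_rank_events for e in events]  # step 2
    return sizes, displs, gathered
```

For example, with four ranks holding payloads of 2, 1, 0 and 3 events, the displacements come out as 0, 2, 3 and 3, and every rank ends up with the same concatenated buffer of 6 events.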


\pagebreak
\section{Problem}
It is highly recommended to avoid receive operations in a loop that
explicitly specifies the source processor. One possible performance
enhancement is to use MPI\_ANY\_SOURCE, which allows a message from
any source processor to be received as soon as it arrives.

Furthermore, non-blocking communication is potentially more efficient
since it gives an opportunity to overlap communication and
computation~\citep{_mpi2_}. In the current application it is not
always possible to overlap communication and computation, since each of
the processors participating in the communication may be required to
receive and handle all the data before it can proceed with the
computations. Thus, the use of non-blocking routines will, on the one
hand, introduce a synchronization overhead (e.g. by calling the
MPI\_Waitall routine), but, on the other hand, can potentially mitigate
network stalling as the total number of processors becomes large.

% Collective algorithm performs communication in two steps: first it
% exchanges the collective data size and then the data itself. It
% would be interesting to look whether one step communication can
% outperform two step strategy under all-to-all communication
% requirenment on a high number of processors.

Furthermore, a performance analysis is required for both communication
algorithms.

\section{Methods}
% The \emph{multiconn} branch, revision \#957 was used for all simulations. This revision is the basis for the upcoming MUSIC 1.2 release.
\subsection{Specification of supercomputers}
\begin{itemize}
\item \textbf{Lindgren}

  A Cray XE6 system (Lindgren) is located at the Center for
  High-Performance Computing (PDC) at the KTH Royal Institute of
  Technology in Stockholm, Sweden. The Cray XE6 system is
  a distributed-memory supercomputer. It has 16 racks with 96 compute
  nodes each. Each compute node consists of two AMD Opteron 12-core
  2.1~GHz ``Magny-Cours'' processors, 32~GiB of DDR3 memory and a
  shared 12~MiB L3 cache. It uses a high-speed network based on the
  Cray Gemini interconnect technology. The total theoretical peak
  performance is 305.6~Tflops. A more detailed specification of the
  Cray XE6 system can be found here:
  \url{http://www.pdc.kth.se/resources/computers/lindgren/hardware}.
  % \item \textbf{BABEL}

  %   Some figures were obtained running simulations on an IBM Blue
  %   Gene/P system (BABEL). The system located at the Institute for
  %   Development and Resources in Intensive Scientific Computing
  %   (IDRIS) in Orsay, France. The Blue Gene/P system is a
  %   distributed-memory supercomputer. It has 10 racks with 1024
  %   compute nodes each. Each compute node consists of four 32-bit
  %   850~MHz IBM PowerPC 450 microprocessors, 2~GiB of RAM, six
  %   connections to the torus network at 3.4~Gbps per link, and a
  %   shared 8~MiB L3 cache. The total theoretical peak performance is
  %   139~Tflops (3.4~Gflops by core). More detailed specification of
  %   the IBM Blue Gene/P machine can be found here:
  %   \url{http://www.idris.fr/su/Scalaire/babel/ibm_pdfs/bg_appli_sg247287-1.pdf}.

\item \textbf{Shaheen}

  An IBM Blue Gene/P system (Shaheen) is located at the KAUST
  Supercomputing Laboratory (KSL) at King Abdullah University of
  Science \& Technology in Thuwal, Saudi Arabia. This system has 16
  racks with 1024 compute nodes each. Each compute node consists of
  four 32-bit 850~MHz IBM PowerPC 450 microprocessors and 4~GiB of
  RAM. The total theoretical peak performance is 222~Tflops. A more
  detailed specification of this IBM Blue Gene/P machine can be found
  here: \url{http://ksl.kaust.edu.sa/Pages/Shaheen.aspx}.

\item \textbf{JUQUEEN}

  An IBM Blue Gene/Q system (JUQUEEN) is located at the J\"{u}lich
  Supercomputing Centre (JSC) at Forschungszentrum J\"{u}lich in
  J\"{u}lich, Germany. This system has 28 racks with 1024 compute
  nodes each. Each compute node consists of sixteen 1.6~GHz IBM
  PowerPC\textsuperscript{\textregistered}~A2 cores and 16~GiB
  of RAM. The overall peak performance is 5.9~Pflops. A more
  detailed specification of this system can be found
  here: \url{http://www.fz-juelich.de/ias/jsc/EN/Expertise/Supercomputers/JUQUEEN/JUQUEEN_node.html}.


\end{itemize}
\subsection{Point-to-point algorithm modifications}
\label{subsec:comm_algo}
\subsubsection{Modification \# 1: MPI\_ANY\_SOURCE}
Figure~\ref{fig:anysource} illustrates the sequence of operations
performed during each initiated communication in the proposed
modification \#1 of the point-to-point communication schema. Each
processor in a communication domain has input connectors, output
connectors or both, depending on the specified connections between the
applications. Here processor \#3 communicates with processor
\#5. In each initiated communication, sorted buffered events are sent
to each target processor, among which is processor \#5, using the
blocking MPI\_Send call. Processor \#5 waits for the events
to arrive from any possible source processor (by passing
MPI\_ANY\_SOURCE as the source argument), among which is processor
\#3. After each data package arrives, events are processed by calling
an application handler function. When the events from all source
processors have been handled, the simulation continues.
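The difference between the fixed-order receive loop of the 2009 schema and the MPI\_ANY\_SOURCE variant can be modeled with a short Python sketch (illustrative only; the arrival list stands in for the order in which messages reach the receiver over the network):

```python
def handled_order_fixed(arrivals):
    """Receive loop of the 2009 schema: MPI_Recv is posted for source
    0, 1, 2, ... in turn, so messages are handled in source-rank order
    even when a later rank's message arrived first."""
    by_source = dict(arrivals)
    return [(src, by_source[src]) for src in sorted(by_source)]

def handled_order_any(arrivals):
    """Modification #1: MPI_Recv with MPI_ANY_SOURCE handles each
    message in arrival order, so event handling overlaps with the
    remaining transfers."""
    return list(arrivals)
```

If the message from rank 2 arrives first, the fixed-order loop still stalls until rank 0's message is in, while the any-source variant handles rank 2's events immediately.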

\subsubsection{Modification \# 2: MPI\_Isend+MPI\_ANY\_SOURCE}
Figure~\ref{fig:isend_anysource} illustrates the sequence of operations
performed during each initiated communication in the proposed
modification \#2 of the point-to-point communication schema. Each
processor in a communication domain has input connectors, output
connectors or both, depending on the specified connections between the
applications. Here processor \#3 communicates with processor \#5. In
each initiated communication, sorted buffered events are sent to each
target processor, among which is processor \#5, using the
non-blocking MPI\_Isend call. Finally, MPI\_Waitall is called to wait
until the data has been safely handled by MPI, and the simulation
continues. Processor \#5 waits for the events to arrive from any
possible source processor by calling MPI\_Recv with MPI\_ANY\_SOURCE
as the source argument (processor \#3 is among them). After each data
package arrives, events are processed by calling an application handler
function. When the events from all source processors have been
handled, the simulation continues.

\subsubsection{Modification \# 3: MPI\_Isend+MPI\_Irecv}
Figure~\ref{fig:isend_irecv} illustrates the sequence of operations
performed during each initiated communication in the proposed
modification \#3 of the point-to-point communication schema. Each
processor in a communication domain has input connectors, output
connectors or both, depending on the specified connections between the
applications. Here processor \#3 has both input and output
connectors. In each initiated communication, sorted buffered events
are sent to each target processor using the non-blocking MPI\_Isend
call. The processor then indicates that it is ready to receive events
from any possible source by posting non-blocking MPI\_Irecv calls with
MPI\_ANY\_SOURCE as the source argument, and waits until any of the
expected packages arrives (MPI\_Waitany). After each data package
arrives, events are processed by calling an application handler
function. When the events from all source processors have been
handled, MPI\_Waitall is called to complete the non-blocking
MPI\_Isend operations. The simulation continues.
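The per-step control flow just described can be modeled as follows (an illustrative Python sketch; \texttt{completion\_order} is a stand-in for the order in which MPI\_Waitany would report completed receive requests):

```python
def mod3_control_flow(n_targets, n_sources, completion_order):
    """Model the per-step control flow of Modification #3: post all
    non-blocking sends, post one MPI_Irecv (with MPI_ANY_SOURCE) per
    expected source, handle each receive as MPI_Waitany reports it,
    and finally complete the sends with MPI_Waitall."""
    log = [f"MPI_Isend to target {t}" for t in range(n_targets)]
    log += [f"MPI_Irecv posted for slot {i}" for i in range(n_sources)]
    for req in completion_order:                # MPI_Waitany loop
        log.append(f"handle events from completed request {req}")
    log.append("MPI_Waitall on send requests")  # finish outstanding sends
    return log
```

The point of the pattern is that handling starts as soon as any receive completes, in completion order, while the sends are only finalized once at the end of the step.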

\begin{figure}
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering
      \includegraphics[width=\textwidth]{figures/p2p_anysource.pdf}
      \caption{Sequence diagram of the modified point-to-point
        communication schema (Modification \#1).}
      \label{fig:anysource}
      \includegraphics[width=\textwidth]{figures/p2p_isend_anysource.pdf}
      \caption{Sequence diagram of the modified point-to-point
        communication schema (Modification \#2). }
      \label{fig:isend_anysource}
      
      \includegraphics[width=0.8\textwidth]{figures/p2p_isend_irecv.pdf}
      \caption{Sequence diagram of the modified point-to-point
        communication schema (Modification \#3).}
      \label{fig:isend_irecv}
    \end{minipage}
  }
\end{figure}
\pagebreak
% \subsubsection{Modification \# 4: single allgather call}
% \pagebreak
\subsection{The benchmark}
%\subsubsection{Artificial spiking cell network model}
\label{sec:ascnm}
To test MUSIC scalability we chose an Artificial Spiking Cell Network
model (ASCN model), first introduced in \cite{kumar_2010} and then used
by \cite{hines_2011}. It was designed to focus on communication
performance by minimizing the computation time of each artificial spiking
cell. It was formulated in NEURON's programming language and used
to compare several neuronal spike exchange methods within the NEURON
simulator on a Blue Gene/P supercomputer.

The model was reimplemented in C++ in order to use it in a benchmark
to study the scaling behavior of the MUSIC library. In this
benchmark we use two instances of the network model. These instances
were connected into a co-simulation (referred to as a multi-simulation in
\cite{_mdj_2010}) using either MUSIC (see Figure~\ref{fig:splitkumar})
or an MPI communication interface (see Appendix~\ref{appx:C}). The
benchmark is available from
\url{https://hines2011.googlecode.com/}.

The network model has ${N = 2~\mathrm{M}}$ artificial cells
(${\mathrm{K} = 1024}$, ${\mathrm{M} = \mathrm{K}^2}$,
${\mathrm{k} = 1000}$) and approximately ${C = 1~\mathrm{k}}$
connections per cell. The number of connections per cell is drawn from
a uniform random distribution over ${C \pm \Delta\mathrm{C}}$, with
${\Delta\mathrm{C} = 50}$. Cells are distributed in equal portions of
sequential GIDs among the processors (referred to as the
``consecutive'' distribution in \cite{hines_2011}). % The second
% network configuration has ${N = 1/4~\mathrm{M}}$ artificial cells
% and approximately ${C = 10~\mathrm{k}}$ connections per cell.
Each artificial cell fires at an average rate of F = 30~Hz. The
connection delay is set to 1~ms. The simulation is run for 200
biological ms (bioms).

\begin{figure}
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering

  \includegraphics[width=0.5\linewidth]{figures/split_kumar3}
  \caption{Co-simulation of the artificial
    spiking cell network model via MUSIC. In the co-simulation, spikes
    are communicated between two network populations of 2~M cells each
    (4~M cells in total) during runtime.}
  \label{fig:splitkumar}
    \end{minipage}
  }
\end{figure}

Given the network and benchmarking configuration described above,
approximately 1.920~MiB of information has to be exchanged in each
communication step (after the network activity has stabilized). This
value is calculated as ${N \cdot 2 \cdot \mathrm{F}/1000 \cdot
\mathrm{A} \cdot \mathrm{S}}$, where ${\mathrm{A} = 1}$~ms is the
maximal acceptable latency and ${\mathrm{S} = 16}$~bytes is the size
of one event, viz. a spike.
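The 1.920~MiB figure can be checked numerically (a short sketch using only the values quoted above):

```python
# Per-communication-step data volume for the ASCN co-simulation,
# using the values quoted in the text.
N = 2 * 1024**2   # cells per population: N = 2 M, M = K^2, K = 1024
F = 30            # average firing rate per cell [Hz]
A = 1             # maximal acceptable latency [ms] (one communication step)
S = 16            # size of one spike event [bytes]

# Both populations exchange their spikes, hence the factor of 2;
# F/1000 converts the rate to spikes per biological millisecond.
bytes_per_step = N * 2 * F / 1000 * A * S
mib_per_step = bytes_per_step / 2**20
print(f"{mib_per_step:.3f} MiB per communication step")   # 1.920 MiB
```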

% \begin{center}
%   \begin{tabular}{|l|l|l|}
%     \hline
%     128 & 256 & 512 & 1024 & 2048\\
%     \hline
%   \end{tabular}
%   \captionof{table}{This table shows some data}
%   \label{tab:a}
% \end{center}



% The number of cells per processor and the number of connections per
% cell were kept the same as in \citet{hines_2011}, in order to obtain
% a similar computational cost. Thus, in total, we use twice the
% amount of cells distributed among twice the amount of processors for
% each simulation (Figure~\ref{fig:splitkumar}). In this co-simulation
% spikes are communicated between two network populations of 2~M or
% 1/4~M cells (in total 4~M or 1/2~M cells accordingly) during the
% runtime. Each artificial cell from one population has ${C}$ input
% connections from another population and vice versa, where value
% ${C}$ is defined according to the number of cells used (see
% above). There are no connections within one population, i.e. the
% whole communication is handled by MUSIC. We will refer to this model
% as the artificial spiking cell network model.


\section{Results}
\subsection{Comparison of the proposed modifications}
Figures~\ref{fig:ln} and~\ref{fig:jq_high} show the performance of the
original pairwise communication schema implemented in the MUSIC 2009
release and of its modifications (Modification \#1, Modification \#2 and
Modification \#3) on a high number of processors. Both Modification \#1
and Modification \#2 result in smaller absolute run times compared
with the communication schema implemented in the MUSIC 2009
release. However, due to the communication overheads at a high number
of processors, none of the algorithms shows a linear speed-up over the
given range of
processors. % However none of the presented schemas terminated
% sucsessfully on the 32~K number of processors (the highest
% tested). The error message was the following:
% \begin{bclogo}[logo=, barre=none]{}
%   FE\_MPI (WARN) : SignalHandler() -  \\
%   BE\_MPI (WARN) : Received a message from frontend\\
%   BE\_MPI (WARN) : Execution of the current command interrupted\\
%   BE\_MPI (ERROR): The error message in the job record is as follows:\\
%   BE\_MPI (ERROR):   "killed with signal 9"\\
%   FE\_MPI (ERROR):  Failure list:\\
%   FE\_MPI (ERROR):   - 1. Execution interrupted by signal (failure \#71)\\
% \end{bclogo}\\
% This error indicates that the job was killed (kill -2 (SIGINT))
% during the execution.
\begin{figure}
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering

      \includegraphics[width=0.73\textwidth]{figures/lindgren_modifications.png}
      \caption[Caption for LOF]{Strong scaling performance for the
        benchmark on the Lindgren machine. Each data point was taken
        using 24 tasks per node. The MUSIC~(2009) curve corresponds to
        the performance measured using MUSIC revision \#957 but with
        the communication schema identical to the one used in the
        MUSIC 2009 release\footnotemark. Modifications \#1 and \#2
        were used with the MUSIC revision \#957, Modification \#3 was
        used with the MUSIC revision \#956, benchmark revision \#3.}
      \label{fig:ln}
      \footnotetext{There are a few reasons why the latest MUSIC
        revision was used. First, many changes have been made in the
        code since the last release, so it is fairer to compare
        against the same revision. Second, a direct comparison with
        the MUSIC 2009 release was impossible due to a bug in its
        scheduling algorithm.}

      \includegraphics[width=0.73\textwidth]{figures/juqueen_modifications.png}
      \caption{Strong scaling performance for the benchmark on the
        JUQUEEN machine. Each data point was taken using 16 tasks per
        node. The same revisions were used as for
        Figure~\ref{fig:ln}.}
      \label{fig:jq_high}
    \end{minipage}
  }

\end{figure}


The difference in scaling behavior between the two supercomputers can
be attributed to the difference in interconnect technologies. The
Cray XE6 blade has two Gemini interconnect Application-Specific
Integrated Circuits (ASICs) serving four two-socket server nodes (each
node has two AMD Opteron 12-core 2.1~GHz ``Magny-Cours'' processors). On
the Blue Gene/Q machine, each node (16 user processors based on the IBM
PowerPC\textsuperscript{\textregistered} A2 1.6~GHz processor core)
consists of a single ASIC that integrates the chip-to-chip communication
(network) logic. Also, the network topology of Blue Gene/Q is a
five-dimensional (5D) torus, while that of the Cray XE6 is a
three-dimensional (3D) torus. To test whether the absolute run times
differ due to the number of tasks per ASIC, a few control points can
be taken with the number of MPI tasks per Gemini set equal to the
number of MPI tasks used per node on JUQUEEN
\mbox{(\it{\#PBS -l mppnppn=8})}.

Modification~\#3 scales rather poorly at a high number of processors,
even though it shows the smallest run time at 1~K processors
(Figure~\ref{fig:jq_high}). This can be explained by the relatively
high cost of the completion operations for small message sizes. In
order to test this hypothesis, the benchmark performance was measured
over the range of processors where the message size lies between
1.92~KiB and 7.68~KiB (see Appendix~\ref{appx:B}), comparing
Modifications \#2 and \#3 (Figure~\ref{fig:jq_low}).


% While the message size remains large enough the synchronization cost
% is imperceptible in respect to the communication cost. While the
% message size becomes small the wait operations are becoming
% unfavorable time wise.

\begin{center}
  \includegraphics[width=0.8\textwidth]{figures/juqueen_modifications_low.png}
  \captionof{figure}{Strong scaling performance for the benchmark on
    the JUQUEEN machine. Each data point was taken using 8 tasks per
    node. Modification \#2 was used with the MUSIC revision \#961,
    Modification \#3 was used with the MUSIC revision \#956, benchmark
    revision was \#6.}
  \label{fig:jq_low}
\end{center}

\subsection{Computation versus communication on high number of
  processors}
In order to get a better understanding of the scaling behavior at a
high number of processors, we look at the computation time in each
integration interval. The computation time is measured as the total run
time excluding the communication time in each integration
interval. The communication time is measured as the sum of the times
of each MPI call used in the communication schema. In our
analysis we only consider the collective algorithm, for which the
computation time includes finding the correct source object,
handling and issuing a spike event.

Figure~\ref{fig:allgather_lb} shows the mean and maximum total
times as well as the mean and maximum computation times among the
processors for each integration interval during the 200~ms
simulation. The shape of the curves corresponds to the number of
spikes generated during each integration interval (see
Figure~\ref{fig:music_spikes}). As described in
Section~\ref{sec:ascnm}, the cell spiking behavior is defined by a
20--40~ms uniform random interval distribution, and it takes a few
20~ms cycles before the firing frequency
stabilizes~\citep{hines_2011}.

\begin{figure}
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering
      \includegraphics[width=\textwidth]{figures/allgather_lb_1K8K.png}
      \caption{Per-interval performance of the MUSIC collective
        algorithm on the Shaheen machine. The top plot corresponds to
        the performance measured on 1~K processors, the bottom plot to
        8~K processors. On each plot, the upper curves correspond to
        the mean and maximum total times, the lower curves to the mean
        and maximum computation times. The red curves show the maximum
        times, the black curves the mean times.}
      \label{fig:allgather_lb}
    \end{minipage}
  }

\end{figure}
\begin{figure}
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering
      \includegraphics[width=\textwidth]{figures/music_spikes_1K8K.png}
      \caption{Fraction of generated and handled spikes per interval by
        processor \#0.
        % in the benchmark described in the
        % Section~\ref{sec:ascnm}.
        The fraction is calculated as the number of spikes relative to
        the maximum possible number of generated or handled spikes on
        processor~\#0. The top plot corresponds to the fraction of
        spikes on 1~K processors, the bottom plot to 8~K
        processors. The blue curve corresponds to the fraction of
        generated spikes, the green curve to the handled incoming
        spikes per interval.}
      \label{fig:music_spikes}

    \end{minipage}
  }
\end{figure}

First, notice that as the number of processors increases, the
computation time per interval clearly decreases, while the total times
show no obvious change. This can be attributed to the non-scaling
nature of the blocking MPI\_Allgather call: as the number of
processors in a communication group increases, the communication cost
increases while the total amount of exchanged information stays the
same.

Second, notice that both the total and the computation times follow
almost identical curves with respect to the overall shape and the
degree of noise. As mentioned above, the overall shape is governed by
the cell firing behavior. The noise can be attributed to the variation
in the number of spikes generated in adjacent intervals per processor,
as shown in Figure~\ref{fig:music_spikes}. Although the variation in
generated spikes increases with the number of processors, the
computational cost of spike generation becomes insignificant compared
with the general MUSIC computation overhead per process, which appears
to smooth the computation curve at 8~K processors.

The artifacts (sharp peaks) appearing on both the total time and the
computation time curves at 8~K processors are of hardware origin and
cannot be attributed to the dynamics of the model or to the general
MUSIC computation overhead. We observe these artifacts neither in the
variation of input spikes handled by different processors nor in the
variation of generated spikes per process (data not shown).

Finally, notice that the difference between the maximum and the mean
total times among the processors increases (see
Figure~\ref{fig:allgather_lb}, upper curves), while the maximum and
the mean computation times show an imperceptible difference (see
Figure~\ref{fig:allgather_lb}, lower curves, and
Figure~\ref{fig:allgather_lb_zoomed}). In order to test whether this
imbalance can be caused by the variance in the computation times among
the processors, we use the ASCN model with the MPI communication
interface (see Appendix~\ref{appx:C},
\textit{CollectiveManager}). This benchmark excludes any computational
overhead introduced by the MUSIC library, as well as the
post-processing of the arriving events, leaving only the minimum
computational load needed to maintain the dynamics of the
model. Figure~\ref{fig:mmusic_allgather_lb} does not show a
perceptible difference between the maximum and the mean total times
among the processors. We therefore conclude that, although the
observed imbalance is quite insignificant and does not really
contribute to the total run times, the load imbalance between the
processors could play a crucial role for overall performance when
using collective operations on a high number of processors.
\begin{figure}
  \fbox{
    \begin{minipage}{0.98\textwidth}
      \centering
    
      \includegraphics[width=\textwidth]{figures/allgather_lb_1K8K_zoomed.png}
      \caption{Per-interval computation times measured in the MUSIC
        collective algorithm on the Shaheen machine (the same data as
        in Figure~\ref{fig:allgather_lb}, zoomed in on the computation
        curves). The top plot corresponds to the computation times per
        interval measured on 1~K processors, the bottom plot to 8~K
        processors. On each plot, the red curves show the maximum
        times, the black curves the mean times.}
      \label{fig:allgather_lb_zoomed}

      \includegraphics[width=\textwidth]{figures/mmusic_allgather_lb_8K.png}
      \caption{Per-interval performance of the collective algorithm
        implemented in the MPI-based interface. The data was obtained
        on the Shaheen machine using 8~K processors. The upper curves
        correspond to the mean and maximum total times, the lower
        curves to the mean and maximum computation times. The red
        curves show the maximum times, the black curves the mean
        times.}
      \label{fig:mmusic_allgather_lb}

    \end{minipage}
  }
\end{figure}

  
 
% \subsection{Postponing3 vs Postponing4}
% \begin{itemize}
% \item Given Tobias test model, Postponing4 outperforms Postponing3
%   scheduling algorithm (Shaheen machine, r940 vs r956). Given Kumar
%   benchmark, the advantage is not obvious. why?  It seems like
%   postponing4 gives slightly higher efficiency but worse scaling. I
%   guess the effect on scaling could be due to communication load
%   being more spread out in time in r940 due to the alternating
%   behavior, or what do you say?

%   But maybe this is an artefact of our computational load being low?
%   Because in r940 computation on the two sides will be arrange
%   serially in relation to eachother and in r956 they will be done in
%   parallel.  If this is true, the prediction is that r956 will make
%   a larger improvement to execution time for the NEST benchmark than
%   in the Kumar benchmark.  (The same effect should be possible to
%   see by increasing the computational load in Kumar by the right
%   magnitude, but you don't need to do that.)
% \item Introducing a computational load (by means of sleep function)
%   in the artificialy cell spiking model (kumar) and maintaining
%   Tobias parameters (like time step, number of cells and connections
%   per cell going though MUSIC), the difference between these two
%   algorithms can be replicated.

% \end{itemize}
% \subsubsection{Modification \#4}
\pagebreak
\section{Conclusions}
During this project, three modifications of the point-to-point
communication algorithm were proposed. All three show a clear
advantage, both in scaling and in overall run time, over the pairwise
communication implemented in the MUSIC 2009 release. While
Modification \#1 and Modification \#2 are more robust, i.e.\ they show
good absolute run times independently of the number of processors and
the machine, Modification \#3 can outperform the first two when the
message size is large (on the order of a KiB or more).

The ASCN model (see Section~\ref{sec:ascnm}) was used to benchmark
MUSIC. In addition, an MPI based communication interface (see
Appendix~\ref{appx:C}) was developed and used to perform a
load-balancing analysis of the collective algorithm. Owing to its
simplicity, this interface can easily be extended with new variants of
the communication algorithms and reused in future performance
analyses.

The performance analysis of the collective algorithm clearly shows
that the MPI\_Allgather call does not scale. Moreover, a small
variance in the computation times among the processors can become
crucial for the overall performance on a large number of processors
(results for 8~K processors are shown). Whether this variation can be
compensated by an extra computational load requires further analysis.
 
An analysis of the benchmark was presented in Appendices~\ref{appx:A}
and~\ref{appx:B}, addressing the connectivity between the processors
and the amount of exchanged information for the given model
parameters. On up to 512 processors, the number of spikes sent to each
processor during a complete pairwise communication is equal to the
number of spikes generated on each processor while running the
benchmark. On 8~K processors, however, this value becomes
approximately 8 times smaller. Together with the fact that the sorting
of spikes is performed on the sender side, this indicates an advantage
of the point-to-point algorithms on a large number of processors.

A recommendation for a future project is to investigate multiple-step
pairwise communication methods and the use of low-level hardware
interfaces such as the System Programming Interface (SPI) on Blue
Gene/Q. The latter approach is particularly recommended for
high-frequency small-message communication on a large number of
processors.

\clearpage

\begin{appendices}
  % \crefalias{section}{appsec}
  \section{How to calculate the probability of all-to-all
    communication on a given number of processors}
  % \subsection{Theory}
  \label{appx:A}
  Given a ``consecutive'' distribution of $N$ cells over $L$
  processors, we would like to calculate the probability of having at
  least one source cell on each of the processors, given $T$ synapses
  per cell. This problem can be formulated in terms of colored balls
  (each processor has its own color) and an urn (the population of
  cells): we would like to draw at least one ball of each color in
  $T$ trials.

  The usual way to solve such problems is enumerative
  combinatorics. However, with numbers of the order of $10^6$ and
  larger the task becomes non-trivial, so approximation techniques and
  probability theory are normally used instead.

  For an initial estimate of the expected number of draws after which
  all colors have been seen, we can use the ``coupon collector's
  problem'' approximation. The expected number of trials $E$ is then
  given by:

\begin{equation}
  E = L\ln L + \gamma L + \frac{1}{2} + o(1),~\mathrm{as}~L\to\infty,
\end{equation}
where $\gamma\approx0.5772156649$ is the Euler-Mascheroni constant.
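As a quick numerical sanity check, the asymptotic formula can be
evaluated directly (a small sketch; the parameter values $N=2$~M,
$C=1000$, $T=N_i C$ are taken from the example later in this
appendix):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant


def expected_trials(L):
    """Coupon-collector approximation: expected number of draws
    needed to see all L colors at least once."""
    return L * math.log(L) + EULER_GAMMA * L + 0.5


# N = 2 M cells (2**21), C = 1000 synapses per cell, T = N_i * C
N, C = 2**21, 1000
for L in (128, 8192):
    T = (N // L) * C
    print(L, round(T / expected_trials(L), 2))
```

The ratio $T/E$ printed here matches the corresponding row of the
table below.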

We can also apply probability theory to obtain approximate values for
the given parameters.
% prob4
\begin{enumerate}
\item Let $\frac{N_i}{N}$ be the probability of drawing a ball of
  color $i$, where $N_i$ is the number of balls of that color.
\item Then $(1-\frac{N_i}{N})$ is the probability of not drawing any
  ball of color $i$ in a single trial.
\item $(1-\frac{N_i}{N})^T$ is the probability of not drawing any ball
  of color $i$ in $T$ trials. To a good approximation we assume
  sampling with replacement, i.e.\ that the balls are returned to the
  urn.
\item Then $(1-(1-\frac{N_i}{N})^T)$ is the probability of drawing at
  least one ball of color $i$ in $T$ trials.
\end{enumerate}
If we assume that the colors are independent, the probability of
drawing at least one ball of each color after $T$ trials can be
calculated as:
\begin{equation}
  P=\prod\limits_{i=1}^{L}\left(1-\left(1-\frac{N_i}{N}\right)^T\right).
\end{equation}
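A direct numerical evaluation of this product (a small sketch, using
the uniform split $N_i=N/L$ and $T=N_i C$ from the example below)
reproduces the probabilities listed in Table~\ref{tab:b}:

```python
def p_all_colors(N, L, C):
    """Probability of drawing at least one ball of each of the L
    colors in T = N_i * C trials, assuming a uniform split
    N_i = N / L, sampling with replacement and independent colors."""
    n_i = N / L
    T = n_i * C
    p_miss = (1 - n_i / N) ** T   # miss one given color in all T trials
    return (1 - p_miss) ** L      # hit every color (independence)


N, C = 2**21, 1000
for L in (512, 8192, 16384):
    print(L, p_all_colors(N, L, C))
```

For up to 8~K processors the probability is essentially 1, while at
16~K it collapses to roughly 0.0013, as in the table.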

% \subsection{Example}
Given $N=2$~M, $C=1000$, $T=N_i C$ and $N_i=\frac{N}{L}$, where $L$
ranges from 128 to 16~K processors, we can expect all-to-all
communication for up to 8~K processors (out of 16~K processors in
total for the given benchmark, see Section~\ref{sec:ascnm}).

\begin{center}
  \begin{tabular*}{0.98\textwidth}{@{\extracolsep{\fill} } | r || c |
      c | c | c | c | c | c | c | }
    \hline
    \cellcolor[gray]{0.9} ${L}$ & 128 & 256 & 512 & 1~K & 2~K & 4~K & 8~K & 16~K\\ \hline
    \cellcolor[gray]{0.9} ${P}$ & 1 & 1 & 1 & 1 & 1 & 1 & 0.9999999998 & 0.00132 \\ \hline
    \cellcolor[gray]{0.9} ${T/E}$ & 23 559.07 & 5 225.05 & 1 173.62 & 266.34 & 60.96 & 14.05 & 3.26 & 0.76 \\ \hline

  \end{tabular*}
  \captionof{table}{The probability $P$ of forming all-to-all
    connectivity between the processors and the ratio $T/E$ of the
    available trials to the ``expected'' number $E$, depending on the
    number of processors $L$.}
  \label{tab:b}
\end{center}

\clearpage
\section{How to calculate the frequency of communicating information}
% \crefalias{section}{appsec}
\label{appx:B}
% \subsection{Theory}
We have $N$ cells distributed over $L$ processors. The all-to-all
connectivity between the processors is determined by the connectivity
between the cells. Let $F$ be the average rate (in bytes per
biological second, bios) at which each cell produces information. We
want to calculate the rate at which each processor communicates
information.

Let $p_i$ be the probability of picking a cell from processor
$i$. Then $p_i=\frac{N_i}{N}$, where $N_i$ is the number of cells
located on processor $i$. Given a binomial distribution with $n=T$
(see Appendix~\ref{appx:A}) and $p=p_i$, the average number of sampled
cells from processor $i$ is $\mu=\min(T p_i, N_i)$. Thus, the mean
rate of communicated information $W_i$ can be calculated as:
\begin{equation}
  W_i=F \cdot L \cdot \min\left(T\frac{N_i}{N},N_i\right).
\end{equation}
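The formula is straightforward to evaluate numerically (a small
sketch; the parameter values $N=2$~M, $C=1000$ and $F=30\cdot16$
bytes/bios follow the example below):

```python
def comm_rate_kib(N, L, C, F):
    """Mean per-processor communication rate W_i in KiB per
    biological second, for a uniform distribution N_i = N / L
    and T = N_i * C trials."""
    n_i = N / L
    T = n_i * C
    mu = min(T * n_i / N, n_i)   # expected number of sampled cells
    return F * L * mu / 1024     # bytes -> KiB


N, C, F = 2**21, 1000, 30 * 16
for L in (128, 1024, 8192):
    print(L, comm_rate_kib(N, L, C, F))
```

These values agree with the $W_i$ row of Table~\ref{tab:b2}.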
% \subsection{Example}
Given $N=2$~M, $C=1000$, $F=30\cdot16$ (the average firing frequency
multiplied by the size of a spike in bytes), $T=N_i C$ and
$N_i=\frac{N}{L}$, where $L$ ranges from 128 to 8~K processors, we can
calculate the rate of communicated information on each
processor. Table~\ref{tab:b2} shows the number of cells per processor
$N_i$, the amount of information communicated by each processor
$W_i$, and the amount of information produced on each processor
($N_i \cdot F$), depending on the number of processors $L$.
\begin{center}
  \begin{tabular*}{0.98\textwidth}{@{\extracolsep{\fill} } | r || c |
      c | c | c | c | c | c | }
    \hline
    \cellcolor[gray]{0.9} $L$ & 128 & 256 & 512 & 1~K & 2~K & 4~K & 8~K \\ \hline
    \cellcolor[gray]{0.9} ${N_i}$ & 16 384 & 8 192 & 4 096 & 2 048 & 1 024 & 512 & 256  \\ \hline
    \cellcolor[gray]{0.9} $W_i~\mathrm{[KiB/bios]}$ & 983 040 & 983 040 & 983 040 & 960 000 & 480 000 & 240 000 & 120 000  \\ \hline
    \cellcolor[gray]{0.9} $N_i \cdot \mathrm{F~[KiB/bios]}$ & 7 680 & 3 840 & 1 920 & 960 & 480 & 240 & 120 \\ \hline
  \end{tabular*}
  \captionof{table}{The number of cells per processor $N_i$, the mean
    rate of information communicated by each processor $W_i$, and the
    rate of information produced by each processor $N_i \cdot F$, as a
    function of the total number of all-to-all communicating
    processors $L$.}
  \label{tab:b2}
\end{center}

Note that for 128, 256 and 512 processors, the rate of communicated
information equals $N_i \cdot \mathrm{F} \cdot L$. This is due to the
``full'' sampling on each of the processors. Although the rate of
communicated information remains constant over this range of
processors, the number of messages and their size differ: as the
number of processors increases, the number of messages increases (each
processor communicates with $L$ processors) while the message size
decreases (the amount of produced information $N_i \cdot \mathrm{F}$
shrinks).
\clearpage
\section{MPI based interface for the ASCN model}\label{appx:C}
An MPI based interface for the ASCN model was developed for
experimentation and to provide flexibility over the communication and
computational phases of the benchmark model. It consists of a class
hierarchy with inheritance relationships between the abstract base
class \textbf{CommManager} and the subclasses that realize the
different communication schemes (Figure~\ref{fig:kumar_mpi}). The
source code is available from
\url{https://hines2011.googlecode.com/trunc/kumar_sim_mpi}.
\begin{center}
  \includegraphics[width=0.95\textwidth]{figures/kumar_mpi.pdf}
  \captionof{figure}{Class diagram of the communication algorithms
    provided by the MPI based communication interface for the ASCN
    model.}
  \label{fig:kumar_mpi}
\end{center}


\begin{description}
\item[CollectiveManager (algo = 0)]\hfill

  This algorithm implements collective communication based on two
  calls of the MPI\_Allgather(v) routine. It does not include
  post-processing of the received data but provides the opportunity to
  introduce delays between the communication steps. The delays are
  either fixed ({\scriptsize\#define CDELAY}) or uniformly distributed
  ({\scriptsize\#define RCDELAY}) around the expected values. Both
  types of delays depend on the amount of exchanged data and on the
  number of processors in the communication group. The algorithm also
  allows recording and saving to a file (``out\_spikes'') the number
  of spikes generated by each processor during each intercommunication
  interval ({\scriptsize\#define OUT\_SPIKES\_COUNT}).
\item[CollectiveManagerL (algo = 1)]\hfill

  This algorithm implements collective communication based on two
  MPI\_Allgather(v) calls with post-processing of the received
  data. It also allows recording and saving to a file (``in\_spikes'')
  the number of spikes received by each processor in each
  communication step ({\scriptsize\#define IN\_SPIKES\_COUNT}).

\item[P2PManager (algo = 2)]\hfill

  This algorithm implements the pairwise communication scheme shown in
  Figure~\ref{fig:isend_anysource}, without post-processing of the
  received data.
\item[P2PSsend (algo = 3)]\hfill

  This algorithm implements pairwise communication similar to the
  scheme shown in Figure~\ref{fig:anysource}. It uses the synchronous
  MPI\_Ssend call instead of the blocking MPI\_Send and does not
  include post-processing of the received data.
\item[P2PManagerL (algo = 4)]\hfill

  This algorithm implements the pairwise communication scheme shown in
  Figure~\ref{fig:isend_anysource}, with post-processing of the
  received data.
\item[P2PSend (algo = 5)]\hfill

  This algorithm implements the pairwise communication scheme shown in
  Figure~\ref{fig:anysource}. It does not include post-processing of
  the received data.
\item[P2PSendLoop (algo = 6)]\hfill

  This algorithm implements the pairwise communication scheme shown in
  Figure~\ref{fig:p2p_2009}. It does not include post-processing of
  the received data.
\item[CollectiveManagerFixedBuf (algo = 7)]\hfill

  This algorithm implements collective communication based on up to
  three MPI\_Allgather(v) calls. During the first call, a fixed-size
  data buffer is exchanged (as discussed in~\cite{hines_2011}). During
  the other two calls, any remaining data is exchanged. It does not
  include post-processing of the received data.
\item[CollectiveManagerLoop (algo = 8)]\hfill

  This algorithm implements collective communication based on at least
  two MPI\_Allgather calls. During the first call, the data size is
  exchanged. During the subsequent calls, the data is exchanged in
  fixed-size chunks. It does not include post-processing of the
  received data.
\end{description}
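The chunked exchange used by CollectiveManagerLoop can be illustrated
with a small host-side sketch (a hypothetical helper, not part of the
actual interface): one MPI\_Allgather exchanges the per-rank payload
sizes, after which the payloads are exchanged in fixed-size rounds
until the largest payload is drained.

```python
import math


def allgather_rounds(payload_sizes, chunk_size):
    """Number of MPI_Allgather calls the chunked scheme needs: one
    call to exchange the sizes, then enough fixed-size data rounds to
    drain the largest per-rank payload (at minimum two calls in
    total). Hypothetical illustration of CollectiveManagerLoop."""
    largest = max(payload_sizes)
    data_rounds = math.ceil(largest / chunk_size) if largest else 0
    return 1 + max(data_rounds, 1)
```

For example, with per-rank payloads of 100, 250 and 80 bytes and a
128-byte chunk, the scheme needs one size exchange plus two data
rounds, i.e. three MPI\_Allgather calls.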

All algorithms provide the opportunity to measure communication and
total time costs in each communication step with microsecond
resolution ({\scriptsize\#define TIME\_SAMPLE}). For the pairwise
algorithms, the communication time costs of the send and receive MPI
primitives are recorded (``stimes'', ``rtimes''). The collective
algorithms produce only one file containing the communication time
costs (``ctimes''). The total time costs of each intercommunication
interval are saved to the ``ttimes'' file.

\end{appendices}
\clearpage \bibliographystyle{abbrvnat}
\bibliography{EBrocke_HPCreport}
\end{document}