

The MPI implementation is contained in main-mpi.f90. The goal was to decompose the problem along the perturbation matrix, which was accomplished by dividing the perturbed models among the available processors. Although our test program is more than capable of calculating all the perturbed models in serial, the decomposition technique was introduced as a proof of concept for real-world problems that would actually require it (e.g., full reactor core models).

In addition to finding an efficient method for communicating the logs between processors, the best value of \textit{nbatch}, the number of histories per log, was also established. Unless otherwise stated, all computations were performed with 1 million particle histories. All timed runs were performed three times on the Flux cluster using Sandy Bridge nodes, and the median values are reported.

\subsection{Model Decomposition}
The MPI program was written to work for any combination of processors and total histories, \textit{NPS}, provided that there are at least as many models as processors. The models were divided among the processors by
\begin{align} \label{eq:model_begin}
model\_begin = \lceil nmodel/procs*rank \rceil + 1 \\
model\_end = \lceil nmodel/procs*(rank+1) \rceil \label{eq:model_end}
\end{align}
where $model\_begin$ and $model\_end$ are the bounds of the models each processor is responsible for retracing.

To ensure that every processor communicates an equal number of times, the total number of batches each processor processes is fixed according to the batch size:
\begin{align}
	batch\_total = \lfloor NPS/procs/nbatch \rfloor.
\end{align}
For every processor, once $nbatch$ histories have been traced, the processor retraces its own log and the logs received from the other processors for the models assigned to it by Eqs. \ref{eq:model_begin}--\ref{eq:model_end}.
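The model bounds and batch count above can be sketched in Python (the program itself is written in Fortran; this is an illustrative check, not the actual implementation):

```python
import math

def model_bounds(rank, procs, nmodel):
    """Model range a given rank retraces (Eqs. for model_begin/model_end)."""
    begin = math.ceil(nmodel / procs * rank) + 1   # 1-based, as in Fortran
    end = math.ceil(nmodel / procs * (rank + 1))
    return begin, end

def batch_total(nps, procs, nbatch):
    """Number of batches each processor traces."""
    return nps // procs // nbatch

procs, nmodel = 8, 100
ranges = [model_bounds(r, procs, nmodel) for r in range(procs)]
# The ranges tile 1..nmodel with no gaps or overlaps:
assert ranges[0][0] == 1 and ranges[-1][1] == nmodel
assert all(ranges[r][0] == ranges[r - 1][1] + 1 for r in range(1, procs))

print(batch_total(1_000_000, 8, 100))  # 1250 batches per processor
```

The ceiling in the bounds guarantees that the model counts per processor differ by at most one, even when $nmodel$ is not divisible by $procs$.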

\subsection{Allgather Method}

One of the simplest ways to implement the log exchange between processors is to call MPI\_ALLGATHERV after every $nbatch$ histories. This way, every processor receives the logs from all other processors before performing the retrace step.

However, if allgather is used, the receive buffer grows proportionally with the number of processors. As a result, we expect this method not to scale to large numbers of processors. To demonstrate this, we compare allgather against the superior ``ring'' method described in Section \ref{sec:ring}. The problem parameters were $nbatch=100$ and $nmodel=100$. The timing results and speedup are shown in Tables \ref{tab:allgather_time}--\ref{tab:allgather_speed}, and the strong scaling efficiency of the two methods is shown in Figure \ref{fig:allgather_scale}.
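The memory argument can be made concrete with a back-of-the-envelope count in units of whole logs rather than bytes (an illustrative model, not the program's actual buffer bookkeeping): allgather must buffer one log from every rank at once, while the ring holds a single neighbour's log per step.

```python
def peak_buffer_logs(method, procs):
    """Logs held in the receive buffer at once (1 log = nbatch histories)."""
    if method == "allgather":
        return procs   # one log from every rank, delivered all at once
    if method == "ring":
        return 1       # only the neighbouring rank's log per step
    raise ValueError(method)

for p in (2, 8, 32, 64):
    print(p, peak_buffer_logs("allgather", p), peak_buffer_logs("ring", p))
```

The total communication volume over a full circulation is comparable for both methods; the ring's advantage is its bounded buffer size and the opportunity to overlap each transfer with a retrace step.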

% Table generated by Excel2LaTeX from sheet 'allgather'
\begin{table}[htbp]
  \centering
  \caption{Run time (in seconds) for the ring and allgather methods, with $nbatch=100$ and $nmodel=100$.}
    \begin{tabular}{|r|c|c|}
    \hline
    \textbf{procs \textbackslash\ method} & \textbf{ring} & \textbf{allgather} \\
    \hline
    \multicolumn{1}{|c|}{\textbf{1}} & 86.16 & 86.26 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{2}} & 42.48 & 44.44 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{4}} & 23.05 & 23.92 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{8}} & 12.95 & 13.31 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{16}} & 7.75  & 8.70 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{32}} & 5.59  & 11.74 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{64}} & 3.96  & 13.72 \\
    \hline
    \end{tabular}%
  \label{tab:allgather_time}%
\end{table}%

% Table generated by Excel2LaTeX from sheet 'allgather'
\begin{table}[htbp]
  \centering
  \caption{Speedup achieved for the ring and allgather methods, with $nbatch=100$ and $nmodel=100$.}
    \begin{tabular}{|r|c|c|}
    \hline
    \textbf{procs \textbackslash\ method} & \textbf{ring} & \textbf{allgather} \\
    \hline
    \multicolumn{1}{|c|}{\textbf{2}} & 2.03  & 1.94 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{4}} & 3.71  & 3.61 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{8}} & 6.61  & 6.48 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{16}} & 11.04 & 9.92 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{32}} & 15.88 & 7.35 \\
    \hline
    \multicolumn{1}{|c|}{\textbf{64}} & 21.74 & 6.29 \\
    \hline
    \end{tabular}%
  \label{tab:allgather_speed}%
\end{table}%

\begin{figure}[h]
	\centering
    \includegraphics[width=0.65\textwidth]{allgather_scale.png}
    \caption{Strong scaling efficiency between the ring and allgather methods.}
    \label{fig:allgather_scale}
\end{figure}

As shown, the ring method scales much better than the allgather method. Beyond 16 processors the total computational time for the allgather method actually increases, showing that the communication burden associated with the growing receive buffer begins to dominate any advantage gained from decomposing the problem among the processors. The ring method was therefore used for the remainder of the analysis.

\subsection{Ring Method} \label{sec:ring}

After a processor traces $nbatch$ histories, it performs a non-blocking SEND of the $primary$ logs to processor $rank+1$, or to the $root$ processor in the case of the last processor. This is followed by a corresponding non-blocking RECEIVE into the $buffer$ logs from processor $rank-1$, or from processor $procs-1$ in the case of the $root$ processor. A retrace calculation is then performed using the $primary$ logs, followed by a WAIT statement which ensures the prior communications have completed. Finally, the received $buffer$ logs are copied into the $primary$ logs so that they are passed on in the next iteration.

As a result, the very first retrace each processor performs is of the log it generated itself. Once a processor finishes retracing a log, the log is passed ``forward'' to the next processor, and this is repeated until every processor has received every other processor's logs. In this way the processors are connected in a ``ring'' pattern and the logs circulate as on a merry-go-round. A visual depiction of the process is shown in Figure \ref{fig:mpi_comm}.
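The circulation described above can be simulated without MPI; a minimal Python sketch of one full circuit (ranks and logs are stand-ins; the real code uses non-blocking MPI calls in Fortran):

```python
def ring_circulate(procs, my_log):
    """Simulate one full circulation of the ring for one batch.

    my_log[r] is the log rank r produced; seen[r] collects every log
    rank r retraces, in the order it retraces them.
    """
    primary = list(my_log)                 # each rank's current log
    seen = [[] for _ in range(procs)]
    for _ in range(procs):                 # procs steps = one full circuit
        buffer = [None] * procs
        for r in range(procs):             # non-blocking send to rank+1
            buffer[(r + 1) % procs] = primary[r]
        for r in range(procs):
            seen[r].append(primary[r])     # retrace current primary log
        primary = buffer                   # wait, then copy buffer -> primary
    return seen

seen = ring_circulate(4, ["log0", "log1", "log2", "log3"])
# Every rank retraces its own log first, then every other rank's log:
assert all(seen[r][0] == f"log{r}" for r in range(4))
assert all(sorted(seen[r]) == ["log0", "log1", "log2", "log3"]
           for r in range(4))
```

The simulation confirms the invariant the method relies on: after $procs$ steps, every rank has retraced every log exactly once.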

\begin{figure}[h]
	\centering
    \includegraphics[width=0.5\textwidth]{mpi_comm.png}
    \caption{The communication performed after every $nbatch$ histories for the ring method.}
    \label{fig:mpi_comm}
\end{figure}

This approach is more efficient than the previously described allgather method for two reasons. First, by performing the non-blocking send and receive before the retrace operation, the latency of sending the logs is masked. Second, each processor only needs to store one set of logs at a time, which gives a lower memory footprint.

\clearpage
\subsection{Batch Size}

For every problem there exists an optimal batch size that minimizes dependency barriers and network overhead. In a Monte Carlo simulation the time required to calculate a single history (i.e., the number of collisions before escape or absorption) varies; as a result, a batch size that is too small causes processors to sit idle waiting for others to finish their histories. Increasing the batch size reduces this effect, but at the cost of sending increasingly large logs, which requires greater network overhead.

These competing effects were studied by running batch sizes from 1 to 1000 on 1 to 32 processors. One hundred models ($nmat=nmodel=100$) were used for all cases. The timing results are shown in Table \ref{tab:mpi_nbatch}, the corresponding speedup in Table \ref{tab:mpi_speed}, and the efficiency for each batch size in Figure \ref{fig:scale_nbatch}.
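For reference, the speedup and strong-scaling efficiency reported in the tables and figures follow the usual definitions; for example, using the $nbatch=100$ column of the timing table:

```python
def speedup(t_serial, t_parallel):
    """Ratio of serial run time to parallel run time."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, procs):
    """Strong-scaling efficiency: speedup divided by processor count."""
    return speedup(t_serial, t_parallel) / procs

# nbatch = 100 run times from the timing table (seconds)
t1, t32 = 86.16, 5.40
print(round(speedup(t1, t32), 2))         # 15.96, as in the speedup table
print(round(efficiency(t1, t32, 32), 3))  # ~0.499 strong-scaling efficiency
```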

% Table generated by Excel2LaTeX from sheet 'nps=1E6'
\begin{table}[htbp]
  \centering
  \caption{Comparison of run times (in seconds) for various numbers of processors and batch sizes.}
% Table generated by Excel2LaTeX from sheet 'nps=1E6'
\begin{tabular}{|r|c|c|c|c|}
\hline
\textbf{procs \textbackslash\ nbatch} & \textbf{1} & \textbf{10} & \textbf{100} & \textbf{1000} \\
\hline
\multicolumn{1}{|c|}{\textbf{1}} & 85.22 & 82.97 & 86.16 & 81.07 \\
\hline
\multicolumn{1}{|c|}{\textbf{2}} & 49.16 & 46.12 & 42.48 & 44.64 \\
\hline
\multicolumn{1}{|c|}{\textbf{4}} & 36.93 & 25.77 & 23.05 & 26.08 \\
\hline
\multicolumn{1}{|c|}{\textbf{8}} & 24.72 & 14.65 & 12.95 & 15.65 \\
\hline
\multicolumn{1}{|c|}{\textbf{16}} & 19.41 & 9.65  & 7.75  & 8.51 \\
\hline
\multicolumn{1}{|c|}{\textbf{32}} & 18.08 & 7.45  & 5.40  & 8.94 \\
\hline
\end{tabular}%
  \label{tab:mpi_nbatch}%
\end{table}%

% Table generated by Excel2LaTeX from sheet 'nps=1E6'
\begin{table}[htbp]
  \centering
  \caption{Speedup achieved for different sized batches.}
    \begin{tabular}{|c|c|c|c|c|}
    \hline
    \textbf{procs \textbackslash  nbatch} & \textbf{1} & \textbf{10} & \textbf{100} & \textbf{1000} \\
    \hline
    \textbf{2} & 1.72  & 1.80  & 2.03  & 1.82 \\
    \hline
    \textbf{4} & 2.28  & 3.22  & 3.74  & 3.11 \\
    \hline
    \textbf{8} & 3.47  & 5.66  & 6.65  & 5.18 \\
    \hline
    \textbf{16} & 4.38  & 8.60  & 11.12 & 9.53 \\
    \hline
    \textbf{32} & 4.70  & 11.14 & 15.96 & 9.06 \\
    \hline
    \end{tabular}%
  \label{tab:mpi_speed}%
\end{table}%

\begin{figure}[h!]
	\centering
    \includegraphics[width=0.65\textwidth]{scale_nbatch.png}
    \caption{Strong scaling efficiency as a function of batch size.}
    \label{fig:scale_nbatch}
\end{figure}

As can be seen in Figure \ref{fig:scale_nbatch}, the efficiency decreases as the number of processors increases. This stems from the larger communication burden as processors are added and, to some extent, from the diminished capacity to mask latency: as the number of models per processor shrinks, the retrace calculation becomes shorter relative to the communication time. It is also evident that as the batch size increases, the dependency barrier caused by uneven calculation times per history diminishes and the efficiency improves. However, at a batch size of 1000 the network overhead of sending larger logs begins to dominate, and the efficiency decreases again. A batch size of 100 was the best of those tested.

\subsection{Number of Models}

Increasing the number of materials, and thus the number of perturbed models, lengthens the computation time for both the trace and retrace steps. To study the scaling effects of varying the number of models, calculations were performed for 50, 100, and 200 models. The batch size was set to 100, the optimal value determined previously. The speedup results are shown in Table \ref{tab:nmodel_speed} and the scaling efficiency is shown in Figure \ref{fig:nmodel_scale}.

% Table generated by Excel2LaTeX from sheet 'nmat'
\begin{table}[htbp]
  \centering
  \caption{Speedup for different numbers of models, with $nbatch=100$.}
\begin{tabular}{|r|c|c|c|}
\hline
\textbf{procs \textbackslash\ nmodel} & \textbf{50} & \textbf{100} & \textbf{200} \\
\hline
\multicolumn{1}{|c|}{\textbf{2}} & 1.93  & 2.03  & 1.92 \\
\hline
\multicolumn{1}{|c|}{\textbf{4}} & 3.57  & 3.71  & 3.70 \\
\hline
\multicolumn{1}{|c|}{\textbf{8}} & 6.03  & 6.61  & 6.92 \\
\hline
\multicolumn{1}{|c|}{\textbf{16}} & 9.46  & 11.04 & 12.08 \\
\hline
\multicolumn{1}{|c|}{\textbf{32}} & 12.76 & 15.88 & 19.48 \\
\hline
\end{tabular}%
  \label{tab:nmodel_speed}%
\end{table}%

\begin{figure}[h]
	\centering
    \includegraphics[width=0.65\textwidth]{nmodel_scale.png}
    \caption{Strong scaling efficiency as a function of number of models, with $nbatch=100$.}
    \label{fig:nmodel_scale}
\end{figure}

The efficiency increased with the number of models. This suggests that the burden of passing larger packets of logs between processors is outweighed by the better latency masking achieved with longer retrace steps. Since each processor has more perturbations to calculate, the efficiency remains higher as more processors are used to divide up the models; in other words, the load balancing between processors improves as the number of models increases.

\subsection{Changing NPS}

Since increasing or decreasing the total number of histories does not change the relative time spent tracing and retracing, we expect it to have very little impact on efficiency: as long as the batch size is fixed, the efficiency should be the same as $NPS$ varies. Two cases were compared, the $NPS=1E6$ case shown in the previous sections and a second case with $NPS=1E7$. The batch size was kept at the optimal value of 100 and the number of models at 100. The speedup results are shown in Table \ref{tab:NPS_speed} and the scaling efficiency is shown in Figure \ref{fig:NPS_scale}.

% Table generated by Excel2LaTeX from sheet 'nps=1E7'
\begin{table}[htbp]
  \centering
  \caption{Speedup for different numbers of histories, with $nbatch=100$ and $nmodel=100$.}
    \begin{tabular}{|c|c|c|}
    \hline
    \textbf{procs \textbackslash\ nps} & \textbf{1E6} & \textbf{1E7} \\
    \hline
    \textbf{2} & 2.03  & 1.87 \\
    \hline
    \textbf{4} & 3.71  & 3.61 \\
    \hline
    \textbf{8} & 6.61  & 6.67 \\
    \hline
    \textbf{16} & 11.04 & 11.31 \\
    \hline
    \textbf{32} & 15.88 & 17.67 \\
    \hline
    \end{tabular}%
  \label{tab:NPS_speed}%
\end{table}%

\begin{figure}[h]
	\centering
    \includegraphics[width=0.65\textwidth]{NPS_scale.png}
    \caption{Strong scaling efficiency for different numbers of histories, with $nbatch=100$ and $nmodel=100$.}
    \label{fig:NPS_scale}
\end{figure}

As expected, changing $NPS$ does not affect the scaling efficiency of our method. This makes sense, since $NPS$ does not change the relative time spent in the trace and retrace steps.

