\documentclass[a4paper,10pt]{article}
\usepackage{amsmath,amsfonts,amsthm,amssymb,graphicx, url}
\usepackage{listings}
\lstset{language=C}
\lstset{basicstyle=\small}
\newcommand{\bvec}[1]{\boldsymbol{#1}} % vectors in bold instead of with an arrow on top

% Title Page
\title{Scientific Computing Lab Assignments}
\author{Florian Speelman \& Jannis Teunissen}


\begin{document}
\maketitle
\section{Some general remarks}
We will refer to parallel processes as tasks, following MPI convention. Unless stated otherwise, we assume the machine
to be homogeneous, so that we run our tests without load balancing.
However, all but the first program we wrote do allow for load balancing.

Most problems discussed in this report can be related to a physical phenomenon,
which we will often do. However, we will not use physical units for the results, as they would only make the results
less general and possibly less clear.

Since nodes in clusters do not always have disk access, we gather data from them during runtime when output is wanted.
For the scalability tests we turn off most output, because the sequential code and disk access will prevent high speedups and might
obfuscate interesting performance results (for example due to caching, communication or memory issues).

For `real' problems it would of course pay off to implement efficient intermediate data collection routines, but then
the typical runtime is several orders of magnitude larger, so that disk access latency is less of a burden.

We performed all scalability tests on a single node, for several reasons. It takes much more time to request multiple nodes,
which is impractical, especially if you want to use the interactive mode. Furthermore, the results become clearer: every
extra task runs under the same conditions as the other ones\footnote{This is not true with multiple nodes: imagine
you run 3 tasks on 2 nodes, with every task communicating with two neighbours.
Then the single task will have to send and receive data from the other node. If you run 4 tasks
on 2 nodes you could have a situation where every task either receives or sends data from the other node.}.
 We could also have achieved this by running just one task per node, but
this seems inappropriate. Finally, the ow cluster (if available) so frequently gave rsh errors that we stopped using it.

This means we do the scalability tests without actual network traffic.
But whether you use Infiniband, 100\,Mbit Ethernet, an ADSL internet connection or a shared memory system,
communication between tasks will still have a typical set-up time and a typical bandwidth. This is the important
property we use to derive time complexity formulas.
\newpage
\section{The matrix-vector product}
Our series of lab assignments started out with a relatively simple one: implement a matrix-vector product $\bvec{A}\bvec{x}=\bvec{y}$ in parallel using MPI.
We used a scenario where the vector $\bvec{x}$ and matrix $\bvec{A}$ were scattered over the tasks. A different type of
scattering will lead to different communication patterns, of which we will discuss the performance implications.
\subsection{Time complexity formula}
We assume here a stripwise row-division of the matrix $\bvec{A}$ and vector $\bvec{x}$, so that the first task has roughly the first $N/p$
rows of $\bvec{A}$ and the first $N/p$ elements of $\bvec{x}$. To distinguish the local partial copies from the complete ones,
we will label them with a subscript $l$.
Several operations are needed to compute the resulting vector $\bvec{y}$ on every processor:
\begin{enumerate}
 \item Gather the vectors $\bvec{x_l}$ into a buffer $\bvec{x}$ on every task.
 \item Compute $\bvec{y_l}=\bvec{A_l}\bvec{x}$.
 \item Gather the vectors $\bvec{y_l}$ into a buffer $\bvec{y}$ on every task.
\end{enumerate}
Now we will make some assumptions about the cost of these operations. We model the `allgather' operation as every task sending its local
part to every other task, so that it takes a time
\[
T_\text{allgather} = p \left(\alpha \cdot \tfrac{N}{p} + \beta\right),
\]
where $\alpha$ is a constant related to network speed, $\beta$ is the network delay, $p$ is the number of tasks and $N$ is the size of the vector $\bvec{x}$.
The computation of the vector $\bvec{y_l}$ would require approximately $N\tfrac{N}{p}$ multiplications and additions, so that the total compute time is
\[
 T_\text{compute} = \gamma N^2/p,
\]
where $\gamma$ is a constant related to the processor's arithmetic performance. Using this model, the total time for computing the matrix vector product is:
\begin{equation}
 T = 2\alpha N + 2p\beta + \gamma N^2/p.
\label{matvecmodel}
\end{equation}

\subsection{Our MPI-implementation}
The implementation we wrote computes the matrix-vector product in two ways; the first is similar to the model discussed in the previous section.
The other uses a stripwise column-division of the matrix. For the gather operation the function \verb|MPI_Allgatherv| is used, since the tasks may have
different local sizes (either $N/p$ or $N/p + 1$ using integer arithmetic). When the matrix is divided column-wise over the tasks, there is no need
for a gather operation; instead we use \verb|MPI_Allreduce| with a summation.

\subsection{Correctness and accuracy}
To check whether the method worked correctly, we set $A_{ij} = i\cdot N + j$ and $x_i = i$, where $i,j$ start at zero. Then
\[
 y_i = \frac{N(N - 0.5)(N-1)}{3} + i \frac{N^2 (N-1)}{2},
\]
so that we can easily check the correctness and accuracy of our method. For problem sizes that we could run on our own computers
there was no difference between the analytical solution and the numerical result. For very large problems that we ran on Lisa there
was a relative difference of the order of the machine precision, probably due to rounding error.
\subsection{Scalability tests}
We ran tests on the Lisa system, using one node and up to seven tasks. Looking at equation \eqref{matvecmodel} we expect different behaviour for
different problem sizes: when $N$ is very large the last term will dominate, so that $T\propto 1/p$. When $N$ is small, the second term will dominate,
so that $T\propto p$. We ran tests with $N = 10$, $N = 500$ and $N = 3\cdot10^4$. Since the communication delays in a Lisa node are subject to some random fluctuations,
and the measurement of very short wallclock runtimes leads to large errors, we did not expect a smooth curve for $N = 10$. Also note that since
a node has a shared memory architecture, there is no network communication. However, equation \eqref{matvecmodel} could still be valid, since memory
transactions are also characterized by a delay and some throughput. See figures \ref{fig:matvec10} to \ref{fig:matvec30k} for
the results. For $N = 500$ we reach some intermediate state, and for $N = 3\cdot 10^4$ we almost have $1/p$ behaviour.

For larger problem sizes, there was no distinguishable performance difference between the stripwise column and row division method. For smaller problems
results were fluctuating quite a bit, but the row-wise division seemed to perform slightly better.

\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{10.png}
\caption{Although the errors in these points are quite large, one can see a linear relation between runtime
and number of tasks.\label{fig:matvec10}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{500.png}
\caption{At first extra tasks give a clear speedup, later on the extra communication prevents any further speedup.\label{fig:matvec500}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{30k.png}
\caption{For this larger problem size the speedup is getting quite close to the optimal one for up to seven tasks.\label{fig:matvec30k}}
\end{center}
\end{figure}
\newpage
\section{The vibrating string}
For the second lab assignment we had to model a vibrating string,
of which the deflection $u(x,t)$ can be described by the following differential equation
\begin{equation}
 \frac{\partial^2 u(x,t)}{\partial t^2} = c^2 \frac{\partial^2 u(x,t)}{\partial x^2},
\label{wavediff}
\end{equation}
where $c$ is the wave velocity. Given initial conditions $u(x, 0) = f(x)$, $\frac{\partial u}{\partial t}(x,0) = 0$ and boundary conditions $u(0,t) = u(1,t) = 0$ we can numerically
solve this differential equation for all $t$.
\subsection{Our MPI-implementation}
Maybe the best way to describe our program is by discussing the functions it uses in order of execution.
\begin{itemize}
 \item initPars() -- this function reads in parameters from the command line, using the library function
getopt(). At the first task, the parameters are stored in a struct. This struct is then broadcasted to the
other tasks. The function initPars() also sets up a ringlike communicator.
 \item waveSolver() -- depending on the parameters, a benchmark may be executed to give the tasks a weight
that is related to their arithmetic speed. The weight then determines how large a part of the spatial domain
each task will have to compute at every iteration. Then the function stepWave() is called.
 \item stepWave() -- here the explicit centered finite difference scheme is implemented. The function builds up
the solution at the next timestep until the endtime is reached. At certain intervals the deflection of the string
may be written to disk. At every iteration, the halo cells are first updated and then sent to the neighbours.
\item writeWave() -- This function gathers every task's part of the string and stores it in a textfile at the root.
It can also be called to convert these textfiles to png images. Since this involves a lot of sequential execution, generating much output
will slow the program down considerably. When writing output, calcWaveError() is used to compare the results to the analytical solution.
\item calcWaveError() -- Given the current time and initial conditions, the current discrete approximation is compared to the analytical
solution.
\end{itemize}
For more information about the code it is probably most efficient to have a direct look at it.
The following command-line parameters can be given; run the program once to see the appropriate switches:
\begin{itemize}
\item\textbf{plucked} if true, the initial position of the wave is plucked, otherwise it's a sine function.
\item\textbf{fixedEnds} if true, the endpoints of the wave are never updated, so that they are fixed. Otherwise periodic boundary conditions apply.
\item\textbf{nOutputs} number of output files to write, equally spaced in time.
\item\textbf{imgOutput} if true, the output files are converted to png images.
\item\textbf{loadBalance} if true, do load balancing for the division of the spatial domain.
\item\textbf{dx} the size of the spatial step.
\item\textbf{dt} the size of the timestep.
\item\textbf{c} the wave velocity.
\item\textbf{maxTime} the endtime of the simulation.
\item\textbf{sinPeriod} the period of the initial position sine function.
\item\textbf{location} the location of the initial pluck.
\item\textbf{amplitude} the amplitude of the pluck or the sine function.
\end{itemize}
Note that the function calcWaveError() only correctly calculates the analytical solution when the sine period is a fraction
$1/n$ for $n = 1,2,3,\dots$, so that the endpoints are fixed at zero.

\subsection{Analytical solution}
If we introduce new variables $v(x,t) = x + ct$ and $w(x,t) = x - ct$, writing out the partial derivatives
with the chain rule turns \eqref{wavediff} into the equivalent form
\[
 \frac{\partial^2 u(v,w)}{\partial v\,\partial w} = 0,
\]
with a general solution of the form $u(v,w) = f(v) + g(w)$. Now if initial conditions are given on the real axis
\begin{align*}
 u(x,0) &= A(x)\\
 \frac{\partial u(x,t)}{\partial t}\Big|_{t=0} &= B(x),
\end{align*}
we solve for $f(v)$ and $g(w)$ to find:
\begin{equation}
 u(x,t) = \tfrac{1}{2}A(x+ct) + \tfrac{1}{2}A(x-ct) + \frac{1}{2c}\int_{x-ct}^{x+ct}B(y)\,dy.
\label{analyticalwave}
\end{equation}
For our string $B(x)$ is simply zero. The initial conditions are not given on the whole real axis, but on the interval $[0,1]$ with the endpoints
fixed at zero. This is equivalent to a situation where $A(x)$ is a periodic function with period two
that is odd around $x=0$ (and because of the periodicity also around $x=1$). With such a choice of $A(x)$, it is obvious that
the endpoints stay fixed at zero, since $A(0+ct) = -A(0-ct)$ and $A(1+ct) = -A(1-ct)$.

With this relatively simple analytical solution one can check the correctness and accuracy of the numerical solution.

\subsection{Stability \& Accuracy}
The explicit centered finite difference scheme that we use
\begin{align*}
 u(x,t+\Delta t) &= 2 u(x,t) - u(x,t-\Delta t)\\
  &+\left(\frac{c\Delta t}{\Delta x}\right)^2 \left[u(x + \Delta x, t) - 2 u(x, t) + u(x - \Delta x, t)\right],
\end{align*}
is stable for $\Delta t < \Delta x / c$. When we use too large a timestep the error grows exponentially, quickly reaching
the maximum size representable by an 8-byte double.

The program we wrote can automatically plot the absolute error over time, and results typically look like figure \ref{fig:waveacc1}.
We expect to see periodicity in the error: when the velocity of the string is large the error will be larger than when it is almost standing still.

In our implementation we set $u(x, -\Delta t) = u(x,0)$,
since we need $u(x, -\Delta t)$ for the first step.
This leads to a numerical error at the start, the size of which we can estimate as 
\[
\text{initial error} = \Delta t\cdot\frac{\partial u(x,t)}{\partial t}\Big|_{t=-\Delta t/2}.
\]
However, the scheme is still consistent for a sine wave as the initial position, since for $\Delta t \to 0$ the error also goes to zero.

For a plucked initial position the error over time is quite interesting, see figure \ref{fig:pluckacc1}.
There are two important reasons: the velocity in the analytical solution has a discontinuity, and the
solution spectrum for such an initial position consists of waves with many different wavenumbers.
Some parts of this spectrum damp out quickly in our numerical scheme, depending on the size of $\Delta t$ and $\Delta x$.
Note that we do not expect the same periodicity here as with the sine wave, since some parts of the string always move
at the maximum velocity. However, at the turning points ($t = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \dots$) 
there is a discontinuity in the velocity, which clearly leads to an increased error.

We also ran a little test to see how the error increased with $\Delta x$ for a constant $\Delta t$,
see figure \ref{fig:werrdx}.
\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{waveerr-2-4.png}
\caption{
Absolute error over time for a timestep $\Delta t = 10^{-4}$ and spatial step $\Delta x = 10^{-2}$, where the initial
position was a sine wave with period and amplitude one. The maxima of the error curve seem to increase linearly.
The periodicity is to be expected, since at $t = \tfrac{1}{2}, \tfrac{3}{2}, \dots$ the velocity of the string is maximal,
so that the distance between the analytical and numerical solution is also at a maximum.
\label{fig:waveacc1}}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{pluckerr-2-4.png}
\caption{
Absolute error over time for a timestep $\Delta t = 10^{-4}$ and spatial step $\Delta x = 10^{-2}$, where the initial
position was a pluck at $x = 0.5$ with amplitude one.
\label{fig:pluckacc1}}
\end{center}
\end{figure}

\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{wavedxerr.png}
\caption{
For a constant $\Delta t = 10^{-6}$ we varied the spatial stepsize and looked at the error at $T = 1$.
The initial position was plucked at $x = 0.5$, with amplitude one. The error seems to grow proportional to $\Delta x^{0.8}$
for these parameters.
\label{fig:werrdx}}
\end{center}
\end{figure}
\subsection{Scalability tests}
We ran several tests on a Lisa node to see how scalable our code was, the results of which can be seen
in figure \ref{fig:wavetime}. Since the problem is divided in the spatial domain over the tasks,
the spatial stepsize $\Delta x$ is of great importance. A tiny spatial stepsize will lead to a higher computation cost per timestep, since there are more
points to be updated, and the higher the computational cost per iteration, the smaller the fractional communication overhead.

There were some more effects though: sometimes adding extra tasks makes the problem size per task small enough so that it fits in the processors' cache.
This can lead to a superlinear speedup, and we consistently observed this when a task had about $1.5\cdot10^4$ points. Considering the three copies
of the array that are used in our numerical scheme this corresponds to about $350$ kbyte, using $8$-byte doubles. So this probably has to do with
cache effects, since the per-core L2 cache of the Xeon cores we use is 256 kbyte.

For a spatial stepsize $\Delta x = 10^{-6}$ we got quite different behaviour: with three or more tasks the results varied a lot and were not improving
as we expected them to. We have no real explanation for this, but maybe a shared memory bus between cores prevents any further speedup.

Finally, for a `large' spatial step size of $\Delta x = 10^{-2}$ we clearly see a decrease in performance upon using multiple tasks. Although
the communication between the tasks is quite fast here (they share memory), with so few points the fractional communication overhead is large.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width = 10cm]{waveruntime.png}
\caption{This figure shows an almost optimal speedup that gets superlinear for a stepsize of $1.0\cdot10^{-5}$,
plotted along with the optimal speedup for comparison. Also shown are results for two other stepsizes,
see the text for discussion. Endtimes are chosen so that the total runtime for the different stepsizes became comparable.\label{fig:wavetime}}
\end{center}
\end{figure}

\section{The time dependent diffusion equation}
For the third assignment we had to implement a numerical solution for the time-dependent diffusion equation,
using two spatial dimensions:
\[
 \frac{\partial c(x,y,t)}{\partial t} = D\nabla^2 c(x,y,t).
\]
The domain we use is the square $0 \leq x,y \leq 1$, with the following boundary conditions:
\begin{align*}
c(x,1,t) &= 1,\\
c(x,0,t) &= 0.
\end{align*}
The rest of the square initially has a concentration zero, and periodic boundary conditions are applied to the $x$-direction
(this makes implementing the numerical scheme more straightforward).

\subsection{Time complexity formula}
First we will model the communication time per iteration for two different
spatial decompositions, one stripwise and the other blockwise.
In the strip-wise case every task sends $N$ points to its two neighbours.
Letting $\beta$ be a constant related to the set-up time of a connection
and $\alpha$ a constant representing connection speed,
we can calculate the communication time for a node as
\begin{equation*}
T_\text{comm} = 2\alpha N + 2 \beta.
\end{equation*}
This does not hold for the top and bottom row, but counting the time
that the slowest task needs suffices; this means
that this formula can be used when $p \geq 3$.

For the block-wise decomposition, let us assume that $\sqrt{p}$ is an integer
and $p \geq 9$, so that there is a task that is not touching the edge.
The communication time is then given by
\begin{equation*}
T_\text{comm} = 4\alpha N/\sqrt{p} + 4 \beta.
\end{equation*}

The computation time per node will be the same for both decompositions,
given that they distribute the domain in equally large pieces:
\begin{equation*}
T_\text{computation} = \gamma \frac{N^2}{p},
\end{equation*}
where $\gamma$ is a constant related to the processor's arithmetic performance.

The best decomposition depends on $\alpha$ and $\beta$, but we will use the first
one, with row-wise strips. It is fairly easy to implement, has more symmetric communication,
and incurs less communication delay, which is often desirable.
Combining the above, the total time per iteration is then given by
\[
 T = 2\alpha N + 2 \beta + \gamma \frac{N^2}{p}.
\]


\subsection{Our MPI-implementation}
The structure of our implementation is very similar to that of the
wave program, so like before we will discuss the most important functions below.
For a sample of the graphical output, see figure \ref{fig:diffsample}.

\begin{itemize}
 \item initPars() -- this function reads in parameters from the command line, using the library function
getopt(). At the first task, the parameters are stored in a struct. This struct is then broadcasted to the
other tasks. The function initPars() also sets up a communicator.
 \item diffSolver() -- depending on the parameters, a benchmark may be executed to give the tasks a weight
that is related to their arithmetic speed. The weight then determines how large a part of the spatial domain
each task will have to compute at every iteration. Then the function initConcentration() is called, before calling stepDiff().
\item initConcentration() -- this function initializes the concentrations in the diffusion domain
to the boundary conditions.
\item stepDiff() -- here the explicit centered finite difference scheme is implemented. The function builds up
the solution at the next timestep until the endtime is reached. At certain intervals the concentrations
may be written to disk. At every iteration, the halo cells are first updated and then sent to the neighbours.
\item writeDiff() -- This function gathers every task's part of the domain and stores it in a textfile at the root.
It can also be called to convert these textfiles to png images. Since this involves a lot of sequential execution, generating much output
will slow the program down considerably.
\item calcDiffErr() -- Computes the analytical solution for a given time and location.

\end{itemize}
Again, for more information about the code, it is probably best to have a look at it.
The following command-line parameters can be given; run the program once to see the appropriate switches:
\begin{itemize}
\item\textbf{nOutputs} number of output files to write, equally spaced in time.
\item\textbf{imgOutput} if true, the output files are converted to png images.
\item\textbf{loadBalance} if true, do load balancing for the division of the spatial domain.
\item\textbf{dx} the size of the spatial step; the domain has around $\frac{1}{{dx}^2}$ points.
\item\textbf{dt} the size of the timestep.
\item\textbf{D} the diffusion constant.
\item\textbf{maxTime} the endtime of the simulation.
\end{itemize}

\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{diff2dex.png}
\caption{
Sample of the graphical output that is created by our program.
\label{fig:diffsample}}
\end{center}
\end{figure}

\subsection{Stability \& accuracy}
The analytical solution for this problem is an infinite series
\[
 c(y,t) = \sum_{j=0}^{\infty}\operatorname{erfc}\left[\frac{1+2j-y}{2\sqrt{Dt}}\right]-\operatorname{erfc}\left[\frac{1+2j+y}{2\sqrt{Dt}}\right].
\]
Note that there is no dependence on the $x$-variable, because of the symmetry of the problem. With the above formula, it is easy to determine
both the correctness and accuracy of our method.
Furthermore, our numerical scheme
\begin{align*}
 c(x,y,t+\Delta t) =& c(x,y,t) + \frac{D\Delta t}{\Delta x^2}\big[c(x + \Delta x,y,t)+c(x-\Delta x,y,t) \\
&+ c(x,y+\Delta y, t) + c(x,y-\Delta y, t) - 4 c(x,y, t)\big],
\end{align*}
is stable when 
\[
\Delta t \leq \frac{\Delta x^2}{4D}.
\]
When we run our simulation with parameters so that it is unstable, we see the error grow exponentially which leads to all
kinds of strange behaviour (for example growing oscillations away from the steady state).

One important difference between our numerical solution and the analytical solution is that in the former the `flow of concentration'
is limited to one spatial step $\Delta x$ per timestep. In the analytical solution points are instantaneously influenced by their
surroundings. This, and the fact that the initial condition is highly discontinuous, leads to an initial error that is quite large.
The steady state is the same for both methods, so the error should decrease to zero for $t\to\infty$.

From a dimensional analysis, one sees that the typical timescale of diffusion is given by $\tau_D=\Delta x^2/D$, so that we expect
the error to decrease after a time of order $\tau_D$, see figure \ref{fig:diff1}.

\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{differr1.png}
\caption{
For this figure $\Delta x = 5\cdot 10^{-2}$, $\Delta t = 10^{-5}$ and $D = 1$ was used. With these parameters
$\tau_D = 2.5\cdot 10^{-3}$, and we see that the error starts to decrease quite smoothly for $t > \tau_D$.
\label{fig:diff1}}
\end{center}
\end{figure}

\subsection{Scalability}
We performed scalability tests on a single Lisa node, and the results were quite comparable to our previous ones,
see figure \ref{fig:diffscale}. For a large spatial stepsize $\Delta x$ adding extra tasks slows the computation down,
since the communication overhead per iteration is large. By making $\Delta x$ smaller, the computational cost per
iteration increases so that the communication overhead is decreased, and the speedup gets almost optimal.

However, when the stepsize becomes `too small', the speedup is quite suboptimal again. We think this might be
due to the shared memory bottleneck: when the local data does not fit in the cache anymore, all the
cores need to access the shared memory. Since the actual computations are just simple additions and
multiplications, this could lead to a bottleneck.
\begin{figure}
\begin{center}
\includegraphics[width = 10cm]{diff2dscale.png}
\caption{
Runtimes for three sets of parameters: first $\Delta x = 5.0\cdot 10^{-2}$, $\Delta t = 5.0\cdot 10^{-6}$,
with an endtime of $100$; second $\Delta x = 5.0\cdot 10^{-3}$, $\Delta t = 5.0\cdot 10^{-6}$ with an
endtime of one; and third $\Delta x = 5.0\cdot 10^{-4}$, $\Delta t = 5.0\cdot 10^{-8}$ with an endtime
of $10^{-4}$. See the text for discussion.
\label{fig:diffscale}}
\end{center}
\end{figure}

\section{Jacobi iteration for the time independent diffusion equation}
The diffusion equation we solved in the previous section
eventually converges to a steady state. Often we are not interested
in the time-dependent behavior, but only in the steady state.
In that case we can set the time derivative in the original diffusion
equation to 0 to obtain the Laplace equation:
\[
\nabla^2 c(x,y) = 0.
\]

Since we need not care whether the intermediate states have any
physical meaning, we may be able to
use more efficient methods than the finite difference scheme used for
solving time-dependent diffusion. The method described in this chapter
follows quite naturally from that finite difference scheme: we
can derive it by setting $\Delta t$ to the largest timestep
that is still stable. This gives us the numerical scheme
\[
c(x,y,t+\Delta t) = \frac{1}{4}\big[c(x + \Delta x,y,t) +
c(x-\Delta x,y,t) + c(x,y+\Delta y, t) + c(x,y-\Delta y, t)\big].
\]

\subsection{Parallel stopping condition}
For this exercise the program was extended with a \textbf{threshold} parameter:
the program stops when the highest absolute difference between
iterations is smaller than the threshold.

The check for convergence has to be done on every grid point, so
it will have complexity $O(N^2)$, like the main calculation. Parallelization
is not hard: the task that computes the next value of a grid point
also finds the highest absolute difference between iterations of its points.

The communication can be done by just communicating one number,
the highest absolute difference, to the others: a single call to \verb|MPI_Allreduce|.
This scales with $O(p)$ and will very quickly become insignificant compared to
the computation of the stopping condition.

Let's calculate the optimal size of the interval, $i$, to use
when checking for convergence. Call the amount of
time needed for an iteration without the check $t$, the total
number of iterations before convergence $n$ and the time needed
for a convergence-check $s$. The average amount of excess iterations
we compute is $(i-1)/2$. Define $\zeta = s/t$,
then we get for the time
\begin{equation}
T = (n+\frac{i-1}{2})(1+\frac{\zeta}{i})t.
\end{equation}
We have to find the $i$ for which this $T$ is minimized.
Because of the shape of this function, we can find it
by setting the derivative to 0:
\begin{align*}
\frac{dT}{di} &= 0\\
\frac{1}{2}+\frac{\zeta}{2i}-\frac{n\zeta}{i^2}-\frac{(i-1)\zeta}{2i^2}&=0
\\ i^2 &= \zeta(2n-1),
\end{align*}
and this gives an optimal interval length of
\begin{equation}
i=\sqrt{\zeta(2n-1)}.
\end{equation}
The $n$ used in this equation is not known in general; we will
find in later experiments how it depends on the threshold.
Still, it is already clear that for optimal performance $i$ will
probably be larger than 1, especially when computing up to
higher accuracies or on larger grids.

An iteration with a convergence check will take
\[
 T = 2\alpha N + 2 \beta + \gamma (1 + \zeta) \frac{N^2}{p} + \kappa p,
\]
where $\kappa$ is a constant depending on the speed of the
reduce function.

We ran some experiments to determine $\zeta$
(which depends on our code, the compiler and the machine)
and to make sure the scaling is $O(N^2)$ as expected. The program
is executed twice for a constant number of iterations, once
with a convergence check at every iteration and once without
any convergence checks. We then compare
the runtimes for varying $dx=1/N$, see the table below.
The data implies that in our case $\zeta \approx 1$.
\begin{center}
\begin{tabular}{|c | c | c|}
\hline
$N$ & runtime with check & runtime without check\\ \hline
100&	3.19e-01&	1.68e-01\\
200&	1.23e+00&	6.21e-01\\
250&	1.89e+00&	9.88e-01\\
300&	2.73e+00&	1.37e+00\\
350&	3.73e+00&	1.88e+00\\
400&	4.91e+00&	2.47e+00\\ \hline
\end{tabular}
\end{center}

\subsection{Correctness and convergence}
When checking convergence behaviour the stopping condition was checked
every iteration; speed was no issue and checking more often gives
more accurate data.
In figure \ref{fig:jacthresh} we can see the relation between
the maximum change per iteration, the iteration number and the grid size.
In the limit of smaller thresholds, the number of iterations
needed to reach the stopping condition grows linearly with the logarithm of the threshold.
This implies that the size of the change between iterations decays
exponentially, with a rate depending on the grid size.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width = 10cm]{jac-threshold.png}
\caption{Number of iterations of the Jacobi method depending
on the threshold parameter for different grid sizes.
\label{fig:jacthresh}}
\end{center}
\end{figure}

For correctness we manually checked the datafiles our program can
generate; in this case that is very easy, because the solution is a linear
one.

\section{Gauss-Seidel iteration}
Again we want to solve the time-independent diffusion equation like
we did with the Jacobi iteration, but now with a different method:
Gauss-Seidel iteration. It is not much faster than the Jacobi iteration,
but it has advantages in memory usage, and it is a stepping stone towards
Successive Over-Relaxation (SOR), a method that can converge much faster.
We will not analyse that method in this report, but it
would be very easy to change our program to use SOR.

For an iteration proceeding along the rows, the Gauss-Seidel iteration would
be written as
\begin{align*}
c(x,y,t+\Delta t) = &\frac{1}{4}\big[c(x + \Delta x,y,t)
+ c(x-\Delta x,y,t+ \Delta t)\\& + c(x,y+\Delta y, t) + c(x,y-\Delta y, t+ \Delta t)\big],
\end{align*}
but we will use a different ordering to be able to 
parallelize the method.

\subsection{Red-Black ordering and complexity}
The idea of Gauss-Seidel iteration is to use
a value as soon as it is computed, but the problem is that the easy
parallelism of the Jacobi iteration is lost. We can solve this by
colouring the computational grid like a checkerboard, red and black.
First we compute all red points in parallel, and then we can compute
all black points in parallel. This ordering introduces an
extra communication step within an iteration, in which we send all red points.
We have twice the number of communication events, but each event only needs
to send half the data.

We can adjust the earlier given time complexity to 
\[
 T = 2\alpha N + 4 \beta + \gamma (1 + \zeta) \frac{N^2}{p} + \kappa p\text{,}
\]
the extra time needed per iteration is just the set-up time for two
extra send operations.

\subsection{Our MPI-implementation}
The Gauss-Seidel iteration can be done in place, so
the memory usage of the program is decreased.
As a consequence, instead of having
a separate loop to check the convergence of
the local points, the check is now done for each point separately.
The division of rows between processes is carried over from
the Jacobi version of the program. This division will give some processes
an odd number of rows. This is no problem in principle, but we have
to be more careful with indices: we need to check whether our
first and last rows start with red or black.

For sending half the points, \verb|MPI_Type_vector| is used to create
a strided vector type. This type is then used as the datatype in the send and receive
routines. The total transfer time is the same as for the Jacobi iteration;
the only difference is the communication overhead, which is doubled.
To keep the code simple, we adjust the spatial step size slightly so that the
number of grid points is even (if necessary). For small stepsizes
the difference is negligible, but it leads to significantly cleaner code.

\subsection{Correctness and convergence}
Figure \ref{fig:gsthresh} shows the same plot as made previously for the Jacobi
iteration (figure \ref{fig:jacthresh}).
We can see that the shape is very similar, and that
the main difference is a scaling factor. This was to be expected:
the effect of one Gauss-Seidel iteration is approximately the same
as that of two Jacobi iterations. The factor between them
is slightly less than two; this is because the larger steps of Gauss-Seidel also
increase the maximum absolute difference between steps.
The convergence graph will therefore be scaled and
shifted to the left compared to the Jacobi graph.

\begin{figure}[!ht]
\begin{center}
\includegraphics[width = 10cm]{gs-threshold.png}
\caption{Number of iterations depending on the threshold parameter
for different grid sizes for the Gauss-Seidel iteration.
\label{fig:gsthresh}}
\end{center}
\end{figure}

Because of the simplicity of the solution we were again
able to quickly verify by hand that our implementation
of the Gauss-Seidel iteration converges to the correct values.

\subsection{Scalability tests}
The scalability results in figure \ref{fig:gsscaling} 
show the success of the re-ordering; the computation time
decreases with approximately $1/p$ when using a large grid. This speed-up
implies the problem is parallelized well.

The behavior for smaller grids is not shown here, but it will be dominated
by communication overhead, just like for the time-dependent program.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width = 10cm]{gs-scaling.png}
\caption{Gauss-Seidel scaling, showing behavior for two grid-sizes.
\label{fig:gsscaling}}
\end{center}
\end{figure}


\end{document}
